nehcgs committed on
Commit 9b49609 · verified · 1 Parent(s): 8736e6d

Upload folder using huggingface_hub
Arch-Function-3B-Q2_K.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ac9a5a3d3ec8188b711c4c2601458cb11dd40ac47f435ddf2ac1adad06163dc2
+ oid sha256:f66f014a1a9c6b74fa032e92f9cc0c1c3be42d5105df5b3f31b0d70a988608c7
  size 1274755488
Arch-Function-3B-Q3_K_L.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f3ebb8209e432f57e9d7513d3bb60cfb523414d5ec68d316afb5c765a97c9b92
+ oid sha256:4be3cbad0651c5f148d8332bb570ad20bda79c7ba9e7b4da6ff85a9009bb6811
  size 1707391392
Arch-Function-3B-Q3_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:badb6a19390fe7abcc0325b435e6ddf7ed0cd30d46967987a896bfaefe3ee6e8
+ oid sha256:f0a2d7ab7d524180b807563533841dd45ddf655c2cb3e667261af69ee187969e
  size 1590475168
Arch-Function-3B-Q3_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:8c71b8fdd436c7a2d1f00ed4322a28b977c292b3dd6a798c0400e40d0ed14c03
+ oid sha256:066c42e06aa2b625a279e7fdf4f41d469e5734005590d4c74525c00307031762
  size 1454356896
Arch-Function-3B-Q4_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7172964e86c7ffa28c7931f1bfb846e4c9d304b74439f14c63de0e918d711f0b
+ oid sha256:681134d4e73e20bcbd14e5870fdde27ea5e9f9ca98c7c11b5d8886387dfc0684
  size 1929902496
Arch-Function-3B-Q4_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3e80ad78953761dd86db7a7c2256ddd528b0c23f95a1e31be1c8bc93e3bf41fb
+ oid sha256:6e638604b973dec2b54f38d2f31594cd37944fe77411c11e9bb266a5ea691b4e
  size 1834383776
Arch-Function-3B-Q5_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:0dd4e6a39763d2175c0b3a3c64f96b2c391f9b5c3ebc01c1fa26493131173095
+ oid sha256:e81a01993cb5e6e14786b7b2aa2003e50b8e6a8b40c8fe1d5ad7c95e65128907
  size 2224814496
Arch-Function-3B-Q5_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:294d4570d2cedc163c4e31d1385d1f8208bd8441b2fb281cdc58bc03c3c1f199
+ oid sha256:9e7248944a554e0abe9b79fdbbe68c56a831c799d73171c6797b4022f0c653d3
  size 2169665952
Arch-Function-3B-Q6_K.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5b4ae2627be1de16a42e01103fbd399028394fc3e28bc2cecaf97eb84c8dded3
+ oid sha256:62149b0a091bee66646f8f651f397aa8cf2aa092f39c280dd519e94af49b8991
  size 2538158496
Arch-Function-3B.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:74f797d8c23e83f887de4d5cfd13db3d3de4bdb05bd5345b674e014f51f53c0b
+ oid sha256:5db7b9f6d9a1570599307b90ec9271f228c736c688fab4083e31369842dd78f4
  size 6178316704
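Each change above swaps only the `oid sha256` value in the Git LFS pointer; the recorded file sizes are unchanged. As a minimal sketch (not part of this repository), a locally downloaded GGUF can be checked against its updated pointer like this, with filenames and oids taken from the diffs above; hashing in chunks keeps multi-gigabyte files out of memory:

```python
import hashlib
import os


def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in 1 MiB chunks so large GGUF files are never fully loaded."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


# Updated oids from the LFS pointers in this commit.
expected = {
    "Arch-Function-3B-Q2_K.gguf": "f66f014a1a9c6b74fa032e92f9cc0c1c3be42d5105df5b3f31b0d70a988608c7",
    "Arch-Function-3B-Q4_K_M.gguf": "681134d4e73e20bcbd14e5870fdde27ea5e9f9ca98c7c11b5d8886387dfc0684",
}

for name, oid in expected.items():
    if os.path.exists(name):  # only check files that have actually been downloaded
        print(name, "OK" if sha256_of_file(name) == oid else "MISMATCH")
```

Comparing `os.path.getsize(name)` against the pointer's `size` field is a cheap first check before hashing.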
README.md CHANGED
@@ -2,16 +2,16 @@
  license: other
  license_name: katanemo-research
  license_link: >-
-   https://huggingface.co/katanemolabs/Arch-Function-1.5B/blob/main/LICENSE
+   https://huggingface.co/katanemolabs/Arch-Function-3B.gguf/blob/main/LICENSE
  base_model:
- - Qwen/Qwen2.5-1.5B-Instruct
+ - katanemo/Arch-Function-3B
  language:
  - en
  pipeline_tag: text-generation
  library_name: transformers
  ---
 
- # katanemo/Arch-Function-1.5B
+ # katanemo/Arch-Function-3B
 
  ## Overview
  The Katanemo Arch-Function collection of large language models (LLMs) is a collection of state-of-the-art (SOTA) LLMs specifically designed for **function calling** tasks. The models are designed to understand complex function signatures, identify required parameters, and produce accurate function call outputs based on natural language prompts. Achieving performance on par with GPT-4, these models set a new benchmark in the domain of function-oriented tasks, making them suitable for scenarios where automated API interaction and function execution are crucial.
@@ -54,7 +54,7 @@ Katanemo Arch-Function collection is built on top of the [Qwen 2.5](https://hugg
 
 
  ## Performance Benchmarks
- We evaluate Katanemo Arch-Function series on the [Berkeley Function-Calling Leaderboard (BFCL)](https://gorilla.cs.berkeley.edu/leaderboard.html#leaderboard). For each model family, we select the one with the highest rank. The results (as of Oct 21st, 2024) are shwon below:
+ We evaluate the Katanemo Arch-Function series on the [Berkeley Function-Calling Leaderboard (BFCL)](https://gorilla.cs.berkeley.edu/leaderboard.html#leaderboard). We compare against commonly used models, and the results (as of Oct 21st, 2024) are shown below. For each model family, we select the one with the highest rank.
 
  <table>
  <tr style="text-align: center; vertical-align: middle; font-weight: bold;">
@@ -84,27 +84,16 @@ We evaluate Katanemo Arch-Function series on the [Berkeley Function-Calling Lead
  <td>63.41%</td>
  <td>82.93%</td>
  </tr>
- <tr style="text-align: center; vertical-align: middle;">
- <td>2</td>
- <td>Functionary-Medium-v3.1 (FC)</td>
- <td>62.02%</td>
- <td>89.52%</td>
- <td>89.77%</td>
- <td>73.48%</td>
- <td>23.50%</td>
- <td>70.73%</td>
- <td>73.32%</td>
- </tr>
- <tr style="text-align: center; vertical-align: middle;">
- <td>5</td>
- <td>ToolACE-8B (FC)</td>
- <td>60.44%</td>
- <td>87.06%</td>
- <td>89.52%</td>
- <td>74.99%</td>
- <td>17.38%</td>
- <td>80.49%</td>
- <td>85.71%</td>
+ <tr style="text-align: center; vertical-align: middle; font-weight: bold;">
+ <td> </td>
+ <td>Arch-Function-7B</td>
+ <td>59.62%</td>
+ <td>86.83%</td>
+ <td>88.07%</td>
+ <td>71.57%</td>
+ <td>21.00%</td>
+ <td>95.12%</td>
+ <td>73.63%</td>
  </tr>
  <tr style="text-align: center; vertical-align: middle;">
  <td>6</td>
@@ -117,28 +106,6 @@ We evaluate Katanemo Arch-Function series on the [Berkeley Function-Calling Lead
  <td>73.17%</td>
  <td>74.60%</td>
  </tr>
- <tr style="text-align: center; vertical-align: middle; font-weight: bold;">
- <td> </td>
- <td>Arch-Function-7B</td>
- <td>58.44%</td>
- <td>85.58%</td>
- <td>88.14%</td>
- <td>69.08%</td>
- <td>20.50%</td>
- <td>92.68%</td>
- <td>74.05%</td>
- </tr>
- <tr style="text-align: center; vertical-align: middle; ">
- <td>8</td>
- <td>xLAM-8x22b-r (FC)</td>
- <td>57.99%</td>
- <td>88.15%</td>
- <td>90.11%</td>
- <td>71.97%</td>
- <td>14.50%</td>
- <td>85.37%</td>
- <td>67.29%</td>
- </tr>
  <tr style="text-align: center; vertical-align: middle; ">
  <td>9</td>
  <td>Gemini-1.5-Flash-002 (Prompt)</td>
@@ -150,16 +117,16 @@ We evaluate Katanemo Arch-Function series on the [Berkeley Function-Calling Lead
  <td>85.37%</td>
  <td>78.54%</td>
  </tr>
- <tr style="text-align: center; vertical-align: middle; ">
- <td>10</td>
- <td>Hammer2.0-7b (FC)</td>
+ <tr style="text-align: center; vertical-align: middle; font-weight: bold;">
+ <td> </td>
+ <td>Arch-Function-3B</td>
  <td>57.69%</td>
- <td>90.27%</td>
- <td>89.25%</td>
- <td>69.79%</td>
- <td>14.75%</td>
- <td>95.12%</td>
- <td>68.46%</td>
+ <td>85.19%</td>
+ <td>86.18%</td>
+ <td>71.21%</td>
+ <td>17.50%</td>
+ <td>90.24%</td>
+ <td>72.88%</td>
  </tr>
  <tr style="text-align: center; vertical-align: middle; ">
  <td>12</td>
@@ -183,50 +150,16 @@ We evaluate Katanemo Arch-Function series on the [Berkeley Function-Calling Lead
  <td>75.61%</td>
  <td>49.44%</td>
  </tr>
- <tr style="text-align: center; vertical-align: middle; font-weight: bold;">
- <td> </td>
- <td>Arch-Function-3B</td>
- <td>56.57%</td>
- <td>83.62%</td>
- <td>85.36%</td>
- <td>66.90%</td>
- <td>19.50%</td>
- <td>97.56%</td>
- <td>70.99%</td>
- </tr>
- </tr>
  <tr style="text-align: center; vertical-align: middle; font-weight: bold;">
  <td> </td>
  <td>Arch-Function-1.5B</td>
- <td>54.52%</td>
- <td>80.31%</td>
- <td>82.04%</td>
- <td>66.19%</td>
- <td>17.25%</td>
- <td>97.56%</td>
- <td>69.95%</td>
- </tr>
- <tr style="text-align: center; vertical-align: middle; ">
- <td>19</td>
- <td>xLAM-7b-r (FC)</td>
- <td>54.41%</td>
- <td>81.40%</td>
- <td>83.46%</td>
- <td>67.88%</td>
- <td>14.50%</td>
- <td>97.56%</td>
- <td>64.05%</td>
- </tr>
- <tr style="text-align: center; vertical-align: middle; ">
- <td>20</td>
- <td>Qwen2.5-7B-Instruct (Prompt)</td>
- <td>54.27%</td>
- <td>85.79%</td>
- <td>88.13%</td>
- <td>65.97%</td>
- <td>11.25%</td>
- <td>92.68%</td>
- <td>64.95%</td>
+ <td>56.20%</td>
+ <td>84.40%</td>
+ <td>83.96%</td>
+ <td>69.36%</td>
+ <td>15.88%</td>
+ <td>87.80%</td>
+ <td>74.39%</td>
  </tr>
  <tr style="text-align: center; vertical-align: middle; ">
  <td>21</td>
@@ -254,7 +187,7 @@ We evaluate Katanemo Arch-Function series on the [Berkeley Function-Calling Lead
 
 
  # Requirements
- The code of Arch-Function-1.5B has been in the Hugging Face `transformers` library and we advise you to install latest version:
+ The code of Arch-Function-3B is available in the Hugging Face `transformers` library, and we advise you to install the latest version:
  ```bash
  pip install transformers>=4.37.0
  ```
@@ -270,7 +203,7 @@ import json
  from typing import Any, Dict, List
  from transformers import AutoModelForCausalLM, AutoTokenizer
 
- model_name = "katanemo/Arch-Function-1.5B"
+ model_name = "katanemo/Arch-Function-3B"
  model = AutoModelForCausalLM.from_pretrained(
      model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True
  )
@@ -409,4 +342,4 @@ The current temperature in Seattle is 62 degrees in Fahrenheit.
 
 
  # License
- Katanemo Arch-Function collection is distributed under the [Katanemo license](https://huggingface.co/katanemolabs/Arch-Function-1.5B/blob/main/LICENSE).
+ The Katanemo Arch-Function collection is distributed under the [Katanemo license](https://huggingface.co/katanemolabs/Arch-Function-3B.gguf/blob/main/LICENSE).