ProximileAdmin committed
Commit 26de573 · verified · 1 Parent(s): 5a349a7

Update README.md

Files changed (1): README.md (+10 -2)
README.md CHANGED
@@ -288,13 +288,21 @@ If the user request does not necessitate a function call, simply respond to the
 # Generate final assistant response
 final_response = chat_completion(lora_model, tokenizer, messages)
 print(f"Assistant (with tool data): {final_response}")
+
+# Assistant: [{"name": "get_weather", "parameters": {"location": "New York", "unit": "fahrenheit"}}]
+# Assistant (with tool data): The current weather in New York is as follows:
+# - Temperature: 72°F
+# - Weather Condition: Partly Cloudy
+# - Humidity: 65%
+# - Wind Speed: 8 miles per hour
+# - Wind Direction: Northeast
 ```
 
 ## Limitations
 
-- LLaDA's diffusion-based generation is different from autoregressive models and may behave differently in certain contexts
+- LLaDA's diffusion-based generation is different from standard LLMs and may behave differently in certain contexts
 - The model may still hallucinate or generate incorrect tool call formats
-- Performance may vary depending on the specific tool calling task
+- The format of the tool call must precisely match what is shown in the example (which is a modified version of [the official llama 3.1 format](https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_1/))
 
 ## Citation
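
For context on the new Limitations note about strict formatting, the sketch below shows one way the tool-call JSON from the example output could be validated and dispatched. It is a minimal illustration, not code from this repository: `parse_tool_calls`, `run_tool_calls`, the `TOOLS` registry, and the local `get_weather` stub are all hypothetical helpers, and only the `[{"name": ..., "parameters": ...}]` shape is taken from the README example.

```python
import json

# Hypothetical local stub of the get_weather tool; a real implementation
# would call a weather API. Values mirror the README's example output.
def get_weather(location, unit="fahrenheit"):
    return {"location": location, "temperature": 72, "unit": unit,
            "condition": "Partly Cloudy", "humidity": 65,
            "wind_speed": 8, "wind_direction": "Northeast"}

TOOLS = {"get_weather": get_weather}

def parse_tool_calls(assistant_output):
    """Return the parsed tool calls if the output matches the expected
    format (a JSON list of objects with exactly the keys "name" and
    "parameters"), else None for a plain-text assistant reply."""
    try:
        calls = json.loads(assistant_output)
    except json.JSONDecodeError:
        return None
    if not isinstance(calls, list):
        return None
    for call in calls:
        if not isinstance(call, dict) or set(call) != {"name", "parameters"}:
            return None  # Format must match precisely, per the Limitations note
    return calls

def run_tool_calls(calls):
    """Dispatch each validated call to its local implementation."""
    results = []
    for call in calls:
        fn = TOOLS.get(call["name"])
        if fn is None:
            raise ValueError(f"Unknown tool: {call['name']}")
        results.append(fn(**call["parameters"]))
    return results

# Example round trip using the assistant output shown in the diff
raw = '[{"name": "get_weather", "parameters": {"location": "New York", "unit": "fahrenheit"}}]'
calls = parse_tool_calls(raw)
if calls:
    print(run_tool_calls(calls))
```

In the flow shown in the diff, the tool result would then be appended to `messages` and `chat_completion(lora_model, tokenizer, messages)` called again to produce the final assistant response.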