
This is the tool set we crawled in April 2024 in the ToolBench format; it was used to train the StableToolBench-MirrorAPI model. To use this tool set, download the .tar.gz file, extract it with tar xzvf toolenv2404_filtered.tar.gz, and set the path of the extracted tools folder in the ToolBench running script, as sketched below.
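A minimal sketch of the extraction step, assuming the archive has already been downloaded into the current directory and unpacks to a toolenv2404_filtered directory containing the tool definitions (the toolenv/ target directory is only an illustrative choice matching the --tool_root_dir value in the script below):

# Extract the archive into toolenv/; the resulting folder is what
# --tool_root_dir should point to in the ToolBench running script.
mkdir -p toolenv
tar xzvf toolenv2404_filtered.tar.gz -C toolenv
ls toolenv/toolenv2404_filtered

For example, a ToolBench running script using these tools: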

export TOOLBENCH_KEY=""
export OPENAI_KEY=""
export OPENAI_API_BASE="" 
export PYTHONPATH=./
export GPT_MODEL="gpt-3.5-turbo-16k"
export SERVICE_URL="http://localhost:8080/virtual"
export OUTPUT_DIR="data/answer/virtual_chatgpt_cot"
group=G1_instruction
mkdir -p $OUTPUT_DIR; mkdir -p $OUTPUT_DIR/$group

# Fill in --tool_root_dir with the path to the extracted tools folder.
python toolbench/inference/qa_pipeline_multithread.py \
    --tool_root_dir toolenv/toolenv2404_filtered \
    --backbone_model chatgpt_function \
    --openai_key $OPENAI_KEY \
    --max_observation_length 1024 \
    --method CoT@1 \
    --input_query_file solvable_queries/test_instruction/${group}.json \
    --output_answer_file $OUTPUT_DIR/$group \
    --toolbench_key $TOOLBENCH_KEY \
    --num_thread 1
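Note that SERVICE_URL above assumes an API-serving backend (e.g., the StableToolBench virtual API server) is already listening at http://localhost:8080/virtual. A quick, non-authoritative way to check that something is listening there before launching the pipeline:

# Prints an HTTP status code if a server answers; prints 000 if the connection is refused.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/virtual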