# Cosmos Diffusion-based World Foundation Models: NeMo Framework User Guide Learn how to [run inference](#inference) with Cosmos Diffusion-based World Foundation Models (WFMs) using the [NVIDIA NeMo Framework](https://docs.nvidia.com/nemo-framework/user-guide/latest/overview.html) for your custom Physical AI tasks by following this guide. ## Model Support Matrix The NeMo Framework supports the following Cosmos Diffusion models. Review the available models and their compute requirements for post-tuning and inference to determine the best model for your use case. | Model Name | Model Status | Compute Requirements for Inference | Multi-GPU Support |----------------------------------------------|------------------|------------------------------------------|---------| | Cosmos-1.0-Diffusion-7B-Text2World | **Supported** | 1 NVIDIA GPU* | **Supported** | | Cosmos-1.0-Diffusion-14B-Text2World | **Supported** | 1 NVIDIA GPU* | **Supported** | | Cosmos-1.0-Diffusion-7B-Video2World | **Supported** | 1 NVIDIA GPU* | **Supported** | | Cosmos-1.0-Diffusion-14B-Video2WorldB | **Supported** | 1 NVIDIA GPU* | **Supported** | **\*** `H100-80GB` or `A100-80GB` GPUs are recommended. ## Post-Trained Model Inference Support Matrix Cosmos Diffusion-based WFMs can also be post-trained for a variety of Physical AI tasks and used for inference. Review the following table for a list of available Physical AI post-training tasks: | Post-training Task | Inference Support Status | |-------------------------|--------------------| | General post-training | **Supported** | | Instruction control | **Coming Soon** | | Action control | **Coming Soon** | | Camera control | **Coming Soon** | | Multi-view generation | **Coming Soon** | | Multi-view generation with vehicle trajectory control | **Coming Soon** | ## Prerequisites ### 1. Review General Requirements - System Configuration - **NVIDIA GPU and driver**: Ensure you have access to the minimum compute required to run the model(s), as listed in the model support matrix. - **Containerization Platform**: We recommend using Docker with NVIDIA Container Runtime (alternatively, you may use NVIDIA enroot). - Get your [Hugging Face User Access Token](https://huggingface.co/docs/hub/en/security-tokens), which is required to obtain the Cosmos models for training and inference. - Get your [Weights and Biases API Key](https://docs.wandb.ai/support/find_api_key/) for logging and tracking. ### 2. Clone the Cosmos Repository ```bash git clone [email protected]:NVIDIA/Cosmos.git ``` ### 3. Start the Container The [NeMo Framework container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo) supports post-training and inference for Cosmos Diffusion models. Run the following command to download and start the container: ```bash docker run --ipc=host -it --gpus=all \ -v $PATH_TO_COSMOS_REPO:/workspace/Cosmos \ nvcr.io/nvidia/nemo:cosmos.1.0.1 bash ``` ### 4. Download Checkpoints To help you get started, we've provided a [download script](../download_diffusion_nemo.py) to get the Cosmos Diffusion Text2World and Video2World checkpoints from Hugging Face. These checkpoints are in the NeMo distributed checkpoint format required to run post-training and inference with NeMo Framework. 1. Set the following environment variables: ```bash # You must set HF_HOME before running this script. export HF_TOKEN="<your/HF/access/token>" export HF_HOME="<path/to/store/checkpoints>" ``` 2. 
Run the following command to download the models: ```bash cd /workspace/Cosmos python cosmos1/models/diffusion/nemo/download_diffusion_nemo.py ``` ## Run Inference Running inference with Cosmos Diffusion Text2World models lets you generate a video conditioned on a text prompt. With the Video2World models, you can generate a video conditioned on a text prompt as well as on an image or video. Note that when supplying an image or video for conditioning the following requirements must be met: - **Video**: The video must be less than 9 frames long - **Image**: The image must be either PNG or JPEG format and have one of the following extensions: `.png`, `.jpg`, or `.jpeg` Our inference script enables accelerated world generation with context parallel. We use context parallelism to split the diffusion process across multiple GPUs, providing near-linear scaling efficiency. Our diffusion pipeline also allows the user to set a variety of hyperparameters including the random seed, classifier-free guidance scale, negative prompt, video resolution, and video fps. General post-training is essentially a continuation of pre-training. To perform inference with models that have been post-trained with general post-training, you can set the `subject_name` parameter to the subject the model was post-trained on. The `prompt` and `conditioned_image_or_video_path` parameters are then used to provide the setting and describe the events in the generated world. The final prompt will be "A video of sks `{subject_name}`. `{prompt}`". We can also use [inference/general.py](./general.py) or [inference/video2world.py](./video2world.py) to perform inference on the base models since the model architectures are the same as the general post-trained models. We also provide the option to upsample the `prompt` and make it more detailed. This can improve the quality of the generated world. Note that for Video2World generation, currently the LLM only looks at your text prompt to upsample the initial prompt, and it does not consider your input image/video for prompt upsampling. We will add text + image processing for prompt upsampling in the near future. ### Run the Inference Script with Base Models #### Text2World Complete the following steps to generate a new output video of a robot cooking. 1. Set the following environment variables: ```bash # HuggingFace Cache to save T5 text encoder, video tokenizer, prompt upsampler, and guardrails weights. export HF_TOKEN="<your/HF/access/token>" export HF_HOME="<path/to/store/checkpoints>" # Number of GPU devices available for inference. Supports up to 8 GPUs for accelerated inference. export NUM_DEVICES=1 export CUDA_VISIBLE_DEVICES=$(seq -s, 0 $((NUM_DEVICES - 1))) # Prompt describing world scene and actions taken by subject (if provided). export PROMPT="The teal robot is cooking food in a kitchen. Steam rises from a simmering pot as the robot chops vegetables on a worn wooden cutting board. Copper pans hang from an overhead rack, catching glints of afternoon light, while a well-loved cast iron skillet sits on the stovetop next to scattered measuring spoons and a half-empty bottle of olive oil." ``` 2. 
Run the following command: ```bash NVTE_FUSED_ATTN=0 \ torchrun --nproc_per_node=$NUM_DEVICES cosmos1/models/diffusion/nemo/inference/general.py \ --model Cosmos-1.0-Diffusion-7B-Text2World \ --cp_size $NUM_DEVICES \ --num_devices $NUM_DEVICES \ --video_save_path "Cosmos-1.0-Diffusion-7B-Text2World.mp4" \ --guidance 7 \ --seed 1 \ --prompt "$PROMPT" \ --enable_prompt_upsampler ``` #### Video2World Complete the following steps to generate a new output video conditioned on an input video and a text prompt using the Video2World models. 1. Set the following environment variables: ```bash # HuggingFace Cache to save T5 text encoder, video tokenizer, prompt upsampler, and guardrails weights. export HF_TOKEN="<your/HF/access/token>" export HF_HOME="<path/to/store/checkpoints>" # Number of GPU devices available for inference. Supports up to 8 GPUs for accelerated inference. export NUM_DEVICES=1 export CUDA_VISIBLE_DEVICES=$(seq -s, 0 $((NUM_DEVICES - 1))) # Prompt describing world scene and actions taken by subject (if provided). export PROMPT="<Supply a prompt here>" export CONDITIONED_IMAGE_OR_VIDEO="<Path to conditioned image or video>" ``` 2. Run the following command: ```bash NVTE_FUSED_ATTN=0 \ torchrun --nproc_per_node=$NUM_DEVICES cosmos1/models/diffusion/nemo/inference/video2world.py \ --model Cosmos-1.0-Diffusion-7B-Video2World \ --cp_size $NUM_DEVICES \ --num_devices $NUM_DEVICES \ --video_save_path "Cosmos-1.0-Diffusion-7B-Video2World.mp4" \ --guidance 7 \ --seed 1 \ --prompt "$PROMPT" \ --conditioned_image_or_video_path "$CONDITIONED_IMAGE_OR_VIDEO" \ --num_input_frames 9 \ --enable_prompt_upsampler ``` ### Run the Inference Script with Post-trained Models Create a post-trained model first, by using the instructions [here](../post_training/README.md) Then complete the following steps to generate a new output video from this model. #### Text2World 1. Set the following environment variables: ```bash # HuggingFace Cache to save T5 text encoder, video tokenizer, prompt upsampler, and guardrails weights. export HF_TOKEN="<your/HF/access/token>" export HF_HOME="<path/to/store/checkpoints>" # Inference with post-trained model. Find post-trained model under nemo_experiments. Example path: export NEMO_CHECKPOINT=nemo_experiments/cosmos_diffusion_7b_text2world_finetune/default/2024-12-17_01-28-03/checkpoints/epoch=39-step=199/weights # Number of GPU devices available for inference. Supports up to 8 GPUs for accelerated inference. export NUM_DEVICES=1 export CUDA_VISIBLE_DEVICES=$(seq -s, 0 $((NUM_DEVICES - 1))) # Prompt describing world scene and actions taken by subject (if provided). export PROMPT="The teal robot is cooking food in a kitchen. Steam rises from a simmering pot as the robot chops vegetables on a worn wooden cutting board. Copper pans hang from an overhead rack, catching glints of afternoon light, while a well-loved cast iron skillet sits on the stovetop next to scattered measuring spoons and a half-empty bottle of olive oil." ``` 2. 
Run the following command: ```bash NVTE_FUSED_ATTN=0 \ torchrun --nproc_per_node=8 cosmos1/models/diffusion/nemo/inference/general.py \ --model Cosmos-1.0-Diffusion-7B-Text2World \ --nemo_checkpoint "$NEMO_CHECKPOINT" \ --cp_size $NUM_DEVICES \ --num_devices $NUM_DEVICES \ --video_save_path "Cosmos-1.0-Diffusion-7B-Text2World.mp4" \ --guidance 7 \ --seed 1 \ --prompt "$PROMPT" \ --subject_name "teal robot" \ --enable_prompt_upsampler ``` ##### Example Output The following output is an example video generated from the post-trained model using [`general.py`](./inference/general.py): <video src="https://github.com/user-attachments/assets/a2b5bc7e-4e0a-4514-a04e-919281cee6fa"> Your browser does not support the video tag. </video> ##### Configuration Options The following table details the parameters that can be modified for accelerated inference with NeMo. You can adjust these parameters to optimize performance based on your specific requirements. The model inference hyperparameters listed below have the same functionality as in [Cosmos Diffusion Common Parameters](cosmos1/models/diffusion/README.md#parameters). | Parameter | Description | Default | |--------------------------------|---------------------------------------------------------------------------------|---------| | `--model` | Name of Cosmos Text2World Diffusion model to use for inference. | `Cosmos-1.0-Diffusion-7B-Text2World` | | `--prompt` | Prompt which the sampled video is conditioned on. Tries to generate what is mentioned in the prompt. | *None* (user must provide) | | `--negative_prompt` | Negative prompt for improved quality | "The video captures a series of frames showing ugly scenes..." | | `--subject_name` | Name of the subject the model was post-trained on. This can be left empty for base model inference. | *None* | | `--guidance` | A control mechanism that determines how closely the model follows specified conditions (prompt) during the generation process. We recommend starting with a guidance of 7 and increasing it later if necessary. | 7 | | `--sampler` | Sampling method used for generation. Only supports **RES** sampler from [this paper](https://arxiv.org/pdf/2308.02157). | `RES` | | `--video_save_path` | Location to save generated videos. | `Cosmos-1.0-Diffusion-7B-Text2World.mp4` | | `--fps` | Frames-per-second of generated video. Cosmos Diffusion models generate videos at 24 FPS by default. | 24 | | `--height` | Height of the generated video. Set to 704 pixels by default, which is the largest supported video height for Cosmos Diffusion. | 704 | | `--width` | Width of the generated video. Set to 1280 pixels by default, which is the largest supported video width for Cosmos Diffusion. | 1280 | | `--seed` | Random seed for generating initial noise sample. Changing this will create a different video for the same prompt. Keep the seed fixed to maintain deterministic video generations. | 1 | | `--num_devices` | [1–8] Number of GPUs to use in parallel for inference. | 8 | | `--cp_size` | [1–8] Number of context parallel ranks to spawn for parallelized inference. Must be equal to `num_devices`. | 8 | #### Video2World 1. Set the following environment variables: ```bash # HuggingFace Cache to save T5 text encoder, video tokenizer, prompt upsampler, and guardrails weights. export HF_TOKEN="<your/HF/access/token>" export HF_HOME="<path/to/store/checkpoints>" # Inference with post-trained model. Find post-trained model under nemo_experiments. 
Example path: export NEMO_CHECKPOINT=nemo_experiments/cosmos_diffusion_7b_video2world_finetune/default/2025-02-03_11-57-33/checkpoints/epoch=39-step=199/weights # Number of GPU devices available for inference. Supports up to 8 GPUs for accelerated inference. export NUM_DEVICES=1 export CUDA_VISIBLE_DEVICES=$(seq -s, 0 $((NUM_DEVICES - 1))) export PROMPT="<Supply a prompt here>" export CONDITIONED_IMAGE_OR_VIDEO="<Path to conditioned image or video>" ``` 2. Run the following command: ```bash NVTE_FUSED_ATTN=0 \ torchrun --nproc_per_node=$NUM_DEVICES cosmos1/models/diffusion/nemo/inference/video2world.py \ --model Cosmos-1.0-Diffusion-7B-Video2World \ --nemo_checkpoint "$NEMO_CHECKPOINT" \ --cp_size $NUM_DEVICES \ --num_devices $NUM_DEVICES \ --video_save_path "Cosmos-1.0-Diffusion-7B-Video2World.mp4" \ --guidance 7 \ --seed 1 \ --prompt "$PROMPT" \ --conditioned_image_or_video_path "$CONDITIONED_IMAGE_OR_VIDEO" \ --subject_name "teal robot" \ --enable_prompt_upsampler ``` ##### Configuration Options The following table details the parameters that can be modified for accelerated inference with NeMo. You can adjust these parameters to optimize performance based on your specific requirements. The model inference hyperparameters listed below have the same functionality as in [Cosmos Diffusion Common Parameters](cosmos1/models/diffusion/README.md#parameters). | Parameter | Description | Default | |--------------------------------|---------------------------------------------------------------------------------|---------| | `--model` | Name of Cosmos Video2World Diffusion model to use for inference. | `Cosmos-1.0-Diffusion-7B-Video2World` | | `--prompt` | Prompt which the sampled video is conditioned on. Tries to generate what is mentioned in the prompt. | *None* (user must provide) | | `--conditioned_image_or_video_path` | Input image or video used for conditioning generations. | *None* (user must provide) | | `--negative_prompt` | Negative prompt for improved quality. | "The video captures a series of frames showing ugly scenes..." | | `--subject_name` | Name of the subject the model was post-trained on. This can be left empty for base model inference. | *None* | | `--guidance` | A control mechanism that determines how closely the model follows specified conditions (prompt) during the generation process. We recommend starting with a guidance of 7 and increasing it later if necessary. | 7 | | `--sampler` | Sampling method used for generation. Only supports **RES** sampler from [this paper](https://arxiv.org/pdf/2308.02157). | `RES` | | `--video_save_path` | Location to save generated videos. | `Cosmos-1.0-Diffusion-7B-Video2World.mp4` | | `--fps` | Frames-per-second of generated video. Cosmos Diffusion models generate videos at 24 FPS by default. | 24 | | `--height` | Height of the generated video. Set to 704 pixels by default, which is the largest supported video height for Cosmos Diffusion. | 704 | | `--width` | Width of the generated video. Set to 1280 pixels by default, which is the largest supported video width for Cosmos Diffusion. | 1280 | | `--seed` | Random seed for generating initial noise sample. Changing this will create a different video for the same prompt. Keep the seed fixed to maintain deterministic video generations. | 1 | | `--num_devices` | [1–8] Number of GPUs to use in parallel for inference. | 8 | | `--cp_size` | [1–8] Number of context parallel ranks to spawn for parallelized inference. Must be equal to `num_devices`.
| 8 | Generated videos are saved at the location configured in the `--video_save_path` parameter. > **Tip**: > For faster inference, you can remove the `--enable_prompt_upsampler` parameter, but this may degrade the generated result. > **Disclaimer**: > The post-training example in this documentation is a demonstration of general post-training and not a guaranteed recipe for success. Post-training outcomes depend heavily on the quality and diversity of the dataset. To achieve good results, ensure your dataset is clean, well-structured, diverse, and properly labeled. Poorly prepared data can lead to issues like overfitting, bias, or poor performance. Carefully curate your dataset to reflect the desired use case for reliable results.
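After a run completes, it can be useful to confirm that the saved MP4 matches the defaults listed in the configuration tables above (1280x704 at 24 FPS). The following is a minimal sketch, assuming `opencv-python` is installed in your environment and that you kept the default `--video_save_path`:

```python
# Quick sanity check of a generated video against the documented defaults
# (1280x704 at 24 FPS). Assumes opencv-python is available in the environment.
import cv2

video_path = "Cosmos-1.0-Diffusion-7B-Text2World.mp4"  # default --video_save_path
cap = cv2.VideoCapture(video_path)
if not cap.isOpened():
    raise RuntimeError(f"Could not open {video_path}")

width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = cap.get(cv2.CAP_PROP_FPS)
num_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
cap.release()

print(f"{video_path}: {width}x{height}, {fps:.1f} FPS, {num_frames} frames")
```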
{ "source": "NVIDIA/Cosmos", "title": "cosmos1/models/diffusion/nemo/inference/README.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/diffusion/nemo/inference/README.md", "date": "2024-12-30T17:21:14", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 18975 }
# Cosmos Diffusion-based World Foundation Models: NeMo Framework User Guide Learn how to [post-train](#post-train) Cosmos Diffusion-based World Foundation Models (WFMs) using the [NVIDIA NeMo Framework](https://docs.nvidia.com/nemo-framework/user-guide/latest/overview.html) for your custom Physical AI tasks by following this guide. ## Model Support Matrix The NeMo Framework supports the following Cosmos Diffusion models. Review the available models and their compute requirements for post-tuning and inference to determine the best model for your use case. | Model Name | Model Status | Compute Requirements for Post-Training | |----------------------------------------------|------------------|------------------------------------------| | Cosmos-1.0-Diffusion-7B-Text2World | **Supported** | 8 NVIDIA GPUs* | | Cosmos-1.0-Diffusion-14B-Text2World | **Supported** | 8 NVIDIA GPUs* | | Cosmos-1.0-Diffusion-7B-Video2World | **Supported** | 8 NVIDIA GPUs* | | Cosmos-1.0-Diffusion-14B-Video2WorldB | **Supported** | 8 NVIDIA GPUs* | **\*** `H100-80GB` or `A100-80GB` GPUs are recommended. ## Post-Training Support Matrix Cosmos Diffusion-based WFMs can be post-trained for a variety of Physical AI tasks. Review the following table for a list of available Physical AI post-training tasks: | Post-training Task | Post-Training Support Status | |-------------------------|--------------------| | General post-training | **Supported** | | Instruction control | **Coming Soon** | | Action control | **Coming Soon** | | Camera control | **Coming Soon** | | Multi-view generation | **Coming Soon** | | Multi-view generation with vehicle trajectory control | **Coming Soon** | ## Prerequisites ### 1. Review General Requirements - System Configuration - **NVIDIA GPU and driver**: Ensure you have access to the minimum compute required to run the model(s), as listed in the model support matrix. - **Containerization Platform**: We recommend using Docker with NVIDIA Container Runtime (alternatively, you may use NVIDIA enroot). - Get your [Hugging Face User Access Token](https://huggingface.co/docs/hub/en/security-tokens), which is required to obtain the Cosmos models for training and inference. - Get your [Weights and Biases API Key](https://docs.wandb.ai/support/find_api_key/) for logging and tracking. ### 2. Clone the Cosmos Repository ```bash git clone [email protected]:NVIDIA/Cosmos.git ``` ### 3. Start the Container The [NeMo Framework container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo) supports post-training and inference for Cosmos Diffusion models. Run the following command to download and start the container: ```bash docker run --ipc=host -it --gpus=all \ -v $PATH_TO_COSMOS_REPO:/workspace/Cosmos \ nvcr.io/nvidia/nemo:cosmos.1.0.1 bash ``` ### 4. Download Checkpoints To help you get started, we've provided a [download script](../download_diffusion_nemo.py) to get the Cosmos Diffusion Text2World and Video2World checkpoints from Hugging Face. These checkpoints are in the NeMo distributed checkpoint format required to run post-training and inference with NeMo Framework. 1. Set the following environment variables: ```bash # You must set HF_HOME before running this script. export HF_TOKEN="<your/HF/access/token>" export HF_HOME="<path/to/store/checkpoints>" ``` 2. 
Run the following command to download the models: ```bash cd /workspace/Cosmos python cosmos1/models/diffusion/nemo/download_diffusion_nemo.py ``` ## Post-train Post-training a Cosmos Diffusion-based WFM enables you to train the model to generate videos that are more specific to your Physical AI use case. For example, if you want to generate action sequences for a specific robot, you can post-train the model to generate videos that are more aligned with typical actions/outcomes for that robot. There are 3 steps to post-training: preparing a dataset, preprocessing the data, and post-training the model. ### 1. Prepare a Dataset The first step is to prepare a dataset. Post-training a Cosmos-1.0-Diffusion-Text2World/Cosmos-1.0-Diffusion-Video2World model enables you to generate videos of a specific subject in new environments using a collection of input videos of that same subject as reference material. You must provide a folder containing a collection of videos in **MP4 format**, preferably 720p. These videos should focus on the subject throughout the entire video so that each video chunk contains the subject. Run the following command to download the sample videos used for post-training: ```bash huggingface-cli download nvidia/Cosmos-NeMo-Assets --repo-type dataset --local-dir cosmos1/models/diffusion/assets/ --include "*.mp4*" ``` ### 2. Preprocess Data for Single Subject Post-training The second step is to preprocess the input videos. This generates the post-training samples and the metadata required for the post-training process by: 1. Selecting `N` chunks of 121 frames from each video, generating `N` post-training samples per video. 2. Encoding the 121 frames by first independently compressing the first frame and then applying an 8x temporal compression for the rest of the frames. 3. Generating `total_samples = # of videos x # of chunks` post-training samples. Before proceeding, ensure all videos are in **RGB format**. Complete the following steps to generate the post-training samples and metadata for the robot dataset. Remember to follow the given prompt format by including the subject's name in the prompt. For example, if the subject is "robot," the prompt should read `"A video of sks robot."`. 1. Set the following environment variables: ```bash export HF_TOKEN="<your/HF/access/token>" export HF_HOME="<path/to/store/checkpoints>" # Path to Raw mp4 videos. export RAW_DATA="cosmos1/models/diffusion/assets/nemo_diffusion_example_data" # Path to Processed Dataset. export CACHED_DATA="./cached_data" && mkdir -p $CACHED_DATA ``` 2. Run the following command to preprocess the data: ```bash python cosmos1/models/diffusion/nemo/post_training/prepare_dataset.py \ --dataset_path $RAW_DATA \ --output_path $CACHED_DATA \ --prompt "A video of sks teal robot." \ --num_chunks 500 \ ``` Executing the [data preprocessing script](./prepare_dataset.py) generates the following files for each video (using `[i]` as the `index` of the video) at `$CACHED_DATA` path: - **`[i].info.json`**: Metadata for the video sample. - **`[i].t5_text_embeddings.pth`**: T5-generated text embedding for the video clip. - **`[i].t5_text_mask.pth`**: Mask for T5 text embedding, set to all ones by default to use the entire text embedding. - **`[i].video_latent.pth`**: 3D spatiotemporal video tokens generated from the video tokenizer. - **`[i].conditioning_latent.pth`**: 3D spatiotemporal video tokens generated from the video tokenizer on the first nine frames of the input video. 
These conditioning latents are only used during Video2World training. ### 2.1 Preprocess Data for Robot Instruction (or other Custom Prompt) Post-training Robot instruction post-training uses instructions as input prompts. Instructions are imperative prompts that correspond to the physical actions performed by the robot in a video. The instruction dataset processing workflow generalizes to any custom input prompt per video. 1. Create an instruction dataset. Create a dataset folder containing videos and per-video instructions in the following format: ``` robot_dataset videos id1.mp4 id2.mp4 ... instructions id1.json id2.json ``` - **`robot_dataset/videos/id1.mp4`**: video clip - **`robot_dataset/instructions/id1.json`**: JSON file with key `language_instruction_0` mapping to a text instruction 2. Run the following command to preprocess the data: ```bash python cosmos1/models/diffusion/nemo/post_training/prepare_instruction_dataset.py \ --dataset_path robot_dataset \ --output_path robot_dataset/processed \ --num_chunks 500 ``` The output dataset is saved in `robot_dataset/processed/` in the same format described in the previous section. ### 3. Post-train the Model The third step is to post-train the model. This step uses NeMo Framework's data and model parallelism capabilities to train the model on the post-training samples. This is accomplished by utilizing Fully Sharded Data Parallel (FSDP) and Tensor Parallelism. - **FSDP**: Distributes model parameters, optimizer states, and activations across all GPUs. - **Tensor Parallelism**: Spreads the parameter tensor of individual layers across GPUs. > **NOTE**: > For the 14B model, we also employ activation checkpointing to facilitate single-node training. #### Run the Post-training Script Complete the following steps to post-train the Cosmos-1.0-Diffusion-7B-Text2World or Cosmos-1.0-Diffusion-7B-Video2World models on the robot dataset using 8 GPUs. ##### Text2World 1. Set the following environment variables: ```bash export HF_TOKEN="<your/HF/access/token>" export HF_HOME="<path/to/store/checkpoints>" # Optionally, you can monitor training progress with Weights and Biases (wandb). export WANDB_API_KEY="</your/wandb/api/key>" export WANDB_PROJECT_NAME="cosmos-diffusion-nemo-post-training" export WANDB_RUN_ID="cosmos_diffusion_7b_text2world_finetune" ``` 2. Run the following command for Cosmos-Diffusion-Text2World-7B general post-training: ```bash NVTE_FUSED_ATTN=0 \ CUDA_DEVICE_MAX_CONNECTIONS=1 \ PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True \ torchrun --nproc_per_node=8 cosmos1/models/diffusion/nemo/post_training/general.py \ --yes \ --factory cosmos_diffusion_7b_text2world_finetune \ data.path=$CACHED_DATA \ trainer.max_steps=1000 \ optim.config.lr=1e-6 ``` ###### Configuration Options Before getting started, review the following parameters made available to the script. You can adjust these parameters to optimize performance based on your specific requirements. | Parameter | Description | Default | |--------------------------------|---------------------------------------------------------------------------------|---------| | `--factory` | Recipe to use (cosmos_diffusion_7b_text2world_finetune or cosmos_diffusion_14b_text2world_finetune) for general post-training | cosmos_diffusion_7b_text2world_finetune | | `data.path` | Path to processed post-training dataset (str). | None | | `resume.restore_config.path` | Path to pre-trained Cosmos Diffusion NeMo distributed checkpoint (str). | None | | `optim.config.lr` | Learning rate (float).
| 1e-6 | ##### Video2World 1. Set the following environment variables: ```bash export HF_TOKEN="<your/HF/access/token>" export HF_HOME="<path/to/store/checkpoints>" # Optionally, you can monitor training progress with Weights and Biases (wandb). export WANDB_API_KEY="</your/wandb/api/key>" export WANDB_PROJECT_NAME="cosmos-diffusion-nemo-post-training" export WANDB_RUN_ID="cosmos_diffusion_7b_video2world_finetune" ``` 2. Run the following command for Cosmos-Diffusion-Video2World-7B general post-training: ```bash NVTE_FUSED_ATTN=0 \ CUDA_DEVICE_MAX_CONNECTIONS=1 \ PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True \ torchrun --nproc_per_node=8 cosmos1/models/diffusion/nemo/post_training/video2world.py \ --yes \ --factory cosmos_diffusion_7b_video2world_finetune \ data.path=$CACHED_DATA \ trainer.max_steps=1000 \ optim.config.lr=1e-6 ``` You can now run inference with your post-trained model using the instructions [here](../inference/README.md#run-the-inference-script-with-post-trained-models). ###### Configuration Options Before getting started, review the following parameters made available to the script. You can adjust these parameters to optimize performance based on your specific requirements. | Parameter | Description | Default | |--------------------------------|---------------------------------------------------------------------------------|---------| | `--factory` | Recipe to use (cosmos_diffusion_7b_video2world_finetune or cosmos_diffusion_14b_video2world_finetune) for video2world post-training | cosmos_diffusion_7b_video2world_finetune | | `data.path` | Path to processed post-training dataset (str). | None | | `resume.restore_config.path` | Path to pre-trained Cosmos Diffusion NeMo distributed checkpoint (str). | None | | `optim.config.lr` | Learning rate (float). | 1e-6 | | `trainer.max_steps` | Max number of post-training steps (int). | 1000 | | `log.log_dir` | Path to folder to save post-training logs and checkpoints (str). | None |
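Before kicking off a multi-hour post-training job, it can help to spot-check the cached samples written by the preprocessing step. The sketch below is illustrative and assumes the file naming pattern listed in the preprocessing section (`0.info.json`, `0.t5_text_embeddings.pth`, `0.video_latent.pth`, and so on under `$CACHED_DATA`) and that each `.pth` file holds a single tensor; the exact shapes depend on your input videos.

```python
# Spot-check one cached sample produced by prepare_dataset.py before launching
# a long post-training run. Illustrative only: file names follow the pattern
# described in the preprocessing section, and each .pth is assumed to store one tensor.
import json
import os

import torch

cached_data = os.environ.get("CACHED_DATA", "./cached_data")
index = 0  # first post-training sample

with open(os.path.join(cached_data, f"{index}.info.json")) as f:
    print("metadata:", json.load(f))

for name in ("t5_text_embeddings", "t5_text_mask", "video_latent"):
    tensor = torch.load(os.path.join(cached_data, f"{index}.{name}.pth"), map_location="cpu")
    print(f"{name}: shape={tuple(tensor.shape)}, dtype={tensor.dtype}")
```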
{ "source": "NVIDIA/Cosmos", "title": "cosmos1/models/diffusion/nemo/post_training/README.md", "url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/diffusion/nemo/post_training/README.md", "date": "2024-12-30T17:21:14", "stars": 7461, "description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.", "file_size": 13563 }
<div align="center"> <img align="left" width="100" height="100" src="https://github.com/user-attachments/assets/1834fc25-42ef-4237-9feb-53a01c137e83" alt=""> # SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory [Cheng-Yen Yang](https://yangchris11.github.io), [Hsiang-Wei Huang](https://hsiangwei0903.github.io/), [Wenhao Chai](https://rese1f.github.io/), [Zhongyu Jiang](https://zhyjiang.github.io/#/), [Jenq-Neng Hwang](https://people.ece.uw.edu/hwang/) [Information Processing Lab, University of Washington](https://ipl-uw.github.io/) </div> [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/samurai-adapting-segment-anything-model-for-1/visual-object-tracking-on-lasot-ext)](https://paperswithcode.com/sota/visual-object-tracking-on-lasot-ext?p=samurai-adapting-segment-anything-model-for-1) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/samurai-adapting-segment-anything-model-for-1/visual-object-tracking-on-got-10k)](https://paperswithcode.com/sota/visual-object-tracking-on-got-10k?p=samurai-adapting-segment-anything-model-for-1) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/samurai-adapting-segment-anything-model-for-1/visual-object-tracking-on-needforspeed)](https://paperswithcode.com/sota/visual-object-tracking-on-needforspeed?p=samurai-adapting-segment-anything-model-for-1) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/samurai-adapting-segment-anything-model-for-1/visual-object-tracking-on-lasot)](https://paperswithcode.com/sota/visual-object-tracking-on-lasot?p=samurai-adapting-segment-anything-model-for-1) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/samurai-adapting-segment-anything-model-for-1/visual-object-tracking-on-otb-2015)](https://paperswithcode.com/sota/visual-object-tracking-on-otb-2015?p=samurai-adapting-segment-anything-model-for-1) [[Arxiv]](https://arxiv.org/abs/2411.11922) [[Project Page]](https://yangchris11.github.io/samurai/) [[Raw Results]](https://drive.google.com/drive/folders/1ssiDmsC7mw5AiItYQG4poiR1JgRq305y?usp=sharing) This repository is the official implementation of SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory https://github.com/user-attachments/assets/9d368ca7-2e9b-4fed-9da0-d2efbf620d88 All rights are reserved to the copyright owners (TM & © Universal (2019)). This clip is not intended for commercial use and is solely for academic demonstration in a research paper. Original source can be found [here](https://www.youtube.com/watch?v=cwUzUzpG8aM&t=4s). ## News - [ ] **Incoming**: Support vot-challenge toolkit intergration. - [ ] **Incoming**: Release demo script to support inference on video (with mask prompt). - [x] **2025/01/27**: Release [inference script](https://github.com/yangchris11/samurai/blob/master/sam2/tools/README.md#samurai-vos-inference) on VOS task (SA-V)! - [x] **2024/11/21**: Release [demo script](https://github.com/yangchris11/samurai?tab=readme-ov-file#demo-on-custom-video) to support inference on video (bounding box prompt). - [x] **2024/11/20** Release [inference script](https://github.com/yangchris11/samurai?tab=readme-ov-file#main-inference) on VOT task (LaSOT, LaSOT-ext, GOT-10k, UAV123, TrackingNet, OTB100)! 
- [x] **2024/11/19**: Release [paper](https://arxiv.org/abs/2411.11922), [code](https://github.com/yangchris11/samurai), and [raw results](https://drive.google.com/drive/folders/1ssiDmsC7mw5AiItYQG4poiR1JgRq305y?usp=sharing)! ## Getting Started #### SAMURAI Installation SAM 2 needs to be installed before use. The code requires `python>=3.10`, as well as `torch>=2.3.1` and `torchvision>=0.18.1`. Please follow the instructions [here](https://github.com/facebookresearch/sam2?tab=readme-ov-file) to install both PyTorch and TorchVision dependencies. You can install **the SAMURAI version** of SAM 2 on a GPU machine using: ``` cd sam2 pip install -e . pip install -e ".[notebooks]" ``` Please see [INSTALL.md](https://github.com/facebookresearch/sam2/blob/main/INSTALL.md) from the original SAM 2 repository for FAQs on potential issues and solutions. Install other requirements: ``` pip install matplotlib==3.7 tikzplotlib jpeg4py opencv-python lmdb pandas scipy loguru ``` #### SAM 2.1 Checkpoint Download ``` cd checkpoints && \ ./download_ckpts.sh && \ cd .. ``` #### Data Preparation Please prepare the data in the following format: ``` data/LaSOT ├── airplane/ │ ├── airplane-1/ │ │ ├── full_occlusion.txt │ │ ├── groundtruth.txt │ │ ├── img │ │ ├── nlp.txt │ │ └── out_of_view.txt │ ├── airplane-2/ │ ├── airplane-3/ │ ├── ... ├── basketball ├── bear ├── bicycle ... ├── training_set.txt └── testing_set.txt ``` #### Main Inference ``` python scripts/main_inference.py ``` ## Demo on Custom Video To run the demo with your custom video or frame directory, use the following examples: **Note:** The `.txt` file contains a single line with the bounding box of the first frame in `x,y,w,h` format, while SAM 2 takes `x1,y1,x2,y2` format as its bbox input (a conversion sketch is included after the FAQs below). ### Input is Video File ``` python scripts/demo.py --video_path <your_video.mp4> --txt_path <path_to_first_frame_bbox.txt> ``` ### Input is Frame Folder ``` # Only JPG images are supported python scripts/demo.py --video_path <your_frame_directory> --txt_path <path_to_first_frame_bbox.txt> ``` ## FAQs **Question 1:** Does SAMURAI need training? [issue 34](https://github.com/yangchris11/samurai/issues/34) **Answer 1:** Unlike real-life samurai, the proposed SAMURAI does not require additional training. It is a zero-shot method; we directly use the weights from SAM 2.1 to conduct VOT experiments. The Kalman filter is used to estimate the current and future state (bounding box location and scale in our case) of a moving object based on measurements over time. It is a common approach that has long been adopted in the field of tracking and does not require any training. Please refer to the code for more details. **Question 2:** Does SAMURAI support streaming input (e.g. webcam)? **Answer 2:** Not yet. The existing code doesn't support live/streaming video as we inherit most of the codebase from the amazing SAM 2. Some discussions that you might be interested in: facebookresearch/sam2#90, facebookresearch/sam2#388 (comment). **Question 3:** How to use SAMURAI on longer videos? **Answer 3:** See the discussion in https://github.com/facebookresearch/sam2/issues/264. **Question 4:** How do you run the evaluation on the VOT benchmarks? **Answer 4:** For LaSOT, LaSOT-ext, OTB, and NFS, please refer to [issue 74](https://github.com/yangchris11/samurai/issues/74) for more details. For GOT-10k-test and TrackingNet, please refer to the official portal for submission.
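As noted in the demo section above, the first-frame bounding box file uses `x,y,w,h` while SAM 2 box prompts use `x1,y1,x2,y2`. `scripts/demo.py` consumes the `.txt` file directly, so this conversion is only needed if you wire up prompts yourself; the snippet below is a minimal illustrative sketch (the file path is hypothetical).

```python
# Convert a first-frame bounding box from the demo's x,y,w,h text format to the
# x1,y1,x2,y2 format expected by SAM 2 box prompts. Not needed when using
# scripts/demo.py, which reads the .txt file directly.
def xywh_to_xyxy(x: float, y: float, w: float, h: float) -> tuple[float, float, float, float]:
    return x, y, x + w, y + h


with open("first_frame_bbox.txt") as f:  # hypothetical path to the bbox .txt file
    x, y, w, h = map(float, f.read().strip().split(","))

print(xywh_to_xyxy(x, y, w, h))  # "100,50,200,200" -> (100.0, 50.0, 300.0, 250.0)
```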
## Acknowledgment SAMURAI is built on top of [SAM 2](https://github.com/facebookresearch/sam2?tab=readme-ov-file) by Meta FAIR. The VOT evaluation code is modified from the [VOT Toolkit](https://github.com/votchallenge/toolkit) by Luka Čehovin Zajc. ## Citation Please consider citing our paper and the wonderful `SAM 2` if you find our work interesting and useful. ``` @article{ravi2024sam2, title={SAM 2: Segment Anything in Images and Videos}, author={Ravi, Nikhila and Gabeur, Valentin and Hu, Yuan-Ting and Hu, Ronghang and Ryali, Chaitanya and Ma, Tengyu and Khedr, Haitham and R{\"a}dle, Roman and Rolland, Chloe and Gustafson, Laura and Mintun, Eric and Pan, Junting and Alwala, Kalyan Vasudev and Carion, Nicolas and Wu, Chao-Yuan and Girshick, Ross and Doll{\'a}r, Piotr and Feichtenhofer, Christoph}, journal={arXiv preprint arXiv:2408.00714}, url={https://arxiv.org/abs/2408.00714}, year={2024} } @misc{yang2024samurai, title={SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory}, author={Cheng-Yen Yang and Hsiang-Wei Huang and Wenhao Chai and Zhongyu Jiang and Jenq-Neng Hwang}, year={2024}, eprint={2411.11922}, archivePrefix={arXiv}, primaryClass={cs.CV}, url={https://arxiv.org/abs/2411.11922}, } ```
{ "source": "yangchris11/samurai", "title": "README.md", "url": "https://github.com/yangchris11/samurai/blob/master/README.md", "date": "2024-11-06T22:46:05", "stars": 6475, "description": "Official repository of \"SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"", "file_size": 8197 }
# Code of Conduct ## Our Pledge In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to make participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. ## Our Standards Examples of behavior that contributes to creating a positive environment include: * Using welcoming and inclusive language * Being respectful of differing viewpoints and experiences * Gracefully accepting constructive criticism * Focusing on what is best for the community * Showing empathy towards other community members Examples of unacceptable behavior by participants include: * The use of sexualized language or imagery and unwelcome sexual attention or advances * Trolling, insulting/derogatory comments, and personal or political attacks * Public or private harassment * Publishing others' private information, such as a physical or electronic address, without explicit permission * Other conduct which could reasonably be considered inappropriate in a professional setting ## Our Responsibilities Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. ## Scope This Code of Conduct applies within all project spaces, and it also applies when an individual is representing the project or its community in public spaces. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. This Code of Conduct also applies outside the project spaces when there is a reasonable belief that an individual's behavior may have a negative impact on the project or its community. ## Enforcement Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at <[email protected]>. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. ## Attribution This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html [homepage]: https://www.contributor-covenant.org For answers to common questions about this code of conduct, see https://www.contributor-covenant.org/faq
{ "source": "yangchris11/samurai", "title": "sam2/CODE_OF_CONDUCT.md", "url": "https://github.com/yangchris11/samurai/blob/master/sam2/CODE_OF_CONDUCT.md", "date": "2024-11-06T22:46:05", "stars": 6475, "description": "Official repository of \"SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"", "file_size": 3540 }
# Contributing to segment-anything We want to make contributing to this project as easy and transparent as possible. ## Pull Requests We actively welcome your pull requests. 1. Fork the repo and create your branch from `main`. 2. If you've added code that should be tested, add tests. 3. If you've changed APIs, update the documentation. 4. Ensure the test suite passes. 5. Make sure your code lints, using the `ufmt format` command. Linting requires `black==24.2.0`, `usort==1.0.2`, and `ufmt==2.0.0b2`, which can be installed via `pip install -e ".[dev]"`. 6. If you haven't already, complete the Contributor License Agreement ("CLA"). ## Contributor License Agreement ("CLA") In order to accept your pull request, we need you to submit a CLA. You only need to do this once to work on any of Facebook's open source projects. Complete your CLA here: <https://code.facebook.com/cla> ## Issues We use GitHub issues to track public bugs. Please ensure your description is clear and has sufficient instructions to be able to reproduce the issue. Facebook has a [bounty program](https://www.facebook.com/whitehat/) for the safe disclosure of security bugs. In those cases, please go through the process outlined on that page and do not file a public issue. ## License By contributing to segment-anything, you agree that your contributions will be licensed under the LICENSE file in the root directory of this source tree.
{ "source": "yangchris11/samurai", "title": "sam2/CONTRIBUTING.md", "url": "https://github.com/yangchris11/samurai/blob/master/sam2/CONTRIBUTING.md", "date": "2024-11-06T22:46:05", "stars": 6475, "description": "Official repository of \"SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"", "file_size": 1424 }
## Installation ### Requirements - Linux with Python ≥ 3.10, PyTorch ≥ 2.3.1 and [torchvision](https://github.com/pytorch/vision/) that matches the PyTorch installation. Install them together at https://pytorch.org to ensure this. * Note older versions of Python or PyTorch may also work. However, the versions above are strongly recommended to provide all features such as `torch.compile`. - [CUDA toolkits](https://developer.nvidia.com/cuda-toolkit-archive) that match the CUDA version for your PyTorch installation. This should typically be CUDA 12.1 if you follow the default installation command. - If you are installing on Windows, it's strongly recommended to use [Windows Subsystem for Linux (WSL)](https://learn.microsoft.com/en-us/windows/wsl/install) with Ubuntu. Then, install SAM 2 from the root of this repository via ```bash pip install -e ".[notebooks]" ``` Note that you may skip building the SAM 2 CUDA extension during installation via environment variable `SAM2_BUILD_CUDA=0`, as follows: ```bash # skip the SAM 2 CUDA extension SAM2_BUILD_CUDA=0 pip install -e ".[notebooks]" ``` This would also skip the post-processing step at runtime (removing small holes and sprinkles in the output masks, which requires the CUDA extension), but shouldn't affect the results in most cases. ### Building the SAM 2 CUDA extension By default, we allow the installation to proceed even if the SAM 2 CUDA extension fails to build. (In this case, the build errors are hidden unless using `-v` for verbose output in `pip install`.) If you see a message like `Skipping the post-processing step due to the error above` at runtime or `Failed to build the SAM 2 CUDA extension due to the error above` during installation, it indicates that the SAM 2 CUDA extension failed to build in your environment. In this case, **you can still use SAM 2 for both image and video applications**. The post-processing step (removing small holes and sprinkles in the output masks) will be skipped, but this shouldn't affect the results in most cases. If you would like to enable this post-processing step, you can reinstall SAM 2 on a GPU machine with environment variable `SAM2_BUILD_ALLOW_ERRORS=0` to force building the CUDA extension (and raise errors if it fails to build), as follows ```bash pip uninstall -y SAM-2 && \ rm -f ./sam2/*.so && \ SAM2_BUILD_ALLOW_ERRORS=0 pip install -v -e ".[notebooks]" ``` Note that PyTorch needs to be installed first before building the SAM 2 CUDA extension. It's also necessary to install [CUDA toolkits](https://developer.nvidia.com/cuda-toolkit-archive) that match the CUDA version for your PyTorch installation. (This should typically be CUDA 12.1 if you follow the default installation command.) After installing the CUDA toolkits, you can check its version via `nvcc --version`. Please check the section below on common installation issues if the CUDA extension fails to build during installation or load at runtime. ### Common Installation Issues Click each issue for its solutions: <details> <summary> I got `ImportError: cannot import name '_C' from 'sam2'` </summary> <br/> This is usually because you haven't run the `pip install -e ".[notebooks]"` step above or the installation failed. Please install SAM 2 first, and see the other issues if your installation fails. In some systems, you may need to run `python setup.py build_ext --inplace` in the SAM 2 repo root as suggested in https://github.com/facebookresearch/sam2/issues/77. 
</details> <details> <summary> I got `MissingConfigException: Cannot find primary config 'configs/sam2.1/sam2.1_hiera_l.yaml'` </summary> <br/> This is usually because you haven't run the `pip install -e .` step above, so `sam2` isn't in your Python's `sys.path`. Please run this installation step. In case it still fails after the installation step, you may try manually adding the root of this repo to `PYTHONPATH` via ```bash export SAM2_REPO_ROOT=/path/to/sam2 # path to this repo export PYTHONPATH="${SAM2_REPO_ROOT}:${PYTHONPATH}" ``` to manually add `sam2_configs` into your Python's `sys.path`. </details> <details> <summary> I got `RuntimeError: Error(s) in loading state_dict for SAM2Base` when loading the new SAM 2.1 checkpoints </summary> <br/> This is likely because you have installed a previous version of this repo, which doesn't have the new modules to support the SAM 2.1 checkpoints yet. Please try the following steps: 1. pull the latest code from the `main` branch of this repo 2. run `pip uninstall -y SAM-2` to uninstall any previous installations 3. then install the latest repo again using `pip install -e ".[notebooks]"` In case the steps above still don't resolve the error, please try running in your Python environment the following ```python from sam2.modeling import sam2_base print(sam2_base.__file__) ``` and check whether the content in the printed local path of `sam2/modeling/sam2_base.py` matches the latest one in https://github.com/facebookresearch/sam2/blob/main/sam2/modeling/sam2_base.py (e.g. whether your local file has `no_obj_embed_spatial`) to indentify if you're still using a previous installation. </details> <details> <summary> My installation failed with `CUDA_HOME environment variable is not set` </summary> <br/> This usually happens because the installation step cannot find the CUDA toolkits (that contain the NVCC compiler) to build a custom CUDA kernel in SAM 2. Please install [CUDA toolkits](https://developer.nvidia.com/cuda-toolkit-archive) or the version that matches the CUDA version for your PyTorch installation. If the error persists after installing CUDA toolkits, you may explicitly specify `CUDA_HOME` via ``` export CUDA_HOME=/usr/local/cuda # change to your CUDA toolkit path ``` and rerun the installation. Also, you should make sure ``` python -c 'import torch; from torch.utils.cpp_extension import CUDA_HOME; print(torch.cuda.is_available(), CUDA_HOME)' ``` print `(True, a directory with cuda)` to verify that the CUDA toolkits are correctly set up. If you are still having problems after verifying that the CUDA toolkit is installed and the `CUDA_HOME` environment variable is set properly, you may have to add the `--no-build-isolation` flag to the pip command: ``` pip install --no-build-isolation -e . ``` </details> <details> <summary> I got `undefined symbol: _ZN3c1015SmallVectorBaseIjE8grow_podEPKvmm` (or similar errors) </summary> <br/> This usually happens because you have multiple versions of dependencies (PyTorch or CUDA) in your environment. During installation, the SAM 2 library is compiled against one version library while at run time it links against another version. This might be due to that you have different versions of PyTorch or CUDA installed separately via `pip` or `conda`. You may delete one of the duplicates to only keep a single PyTorch and CUDA version. In particular, if you have a lower PyTorch version than 2.3.1, it's recommended to upgrade to PyTorch 2.3.1 or higher first. 
Otherwise, the installation script will try to upgrade to the latest PyTorch using `pip`, which could sometimes lead to duplicated PyTorch installation if you have previously installed another PyTorch version using `conda`. We have been building SAM 2 against PyTorch 2.3.1 internally. However, a few user comments (e.g. https://github.com/facebookresearch/sam2/issues/22, https://github.com/facebookresearch/sam2/issues/14) suggested that downgrading to PyTorch 2.1.0 might resolve this problem. In case the error persists, you may try changing the restriction from `torch>=2.3.1` to `torch>=2.1.0` in both [`pyproject.toml`](pyproject.toml) and [`setup.py`](setup.py) to allow PyTorch 2.1.0. </details> <details> <summary> I got `CUDA error: no kernel image is available for execution on the device` </summary> <br/> A possible cause could be that the CUDA kernel is somehow not compiled towards your GPU's CUDA [capability](https://developer.nvidia.com/cuda-gpus). This could happen if the installation is done in an environment different from the runtime (e.g. in a slurm system). You can try pulling the latest code from the SAM 2 repo and running the following ``` export TORCH_CUDA_ARCH_LIST="9.0 8.0 8.6 8.9 7.0 7.2 7.5 6.0" ``` to manually specify the CUDA capability in the compilation target that matches your GPU. </details> <details> <summary> I got `RuntimeError: No available kernel. Aborting execution.` (or similar errors) </summary> <br/> This is probably because your machine doesn't have a GPU or a compatible PyTorch version for Flash Attention (see also https://discuss.pytorch.org/t/using-f-scaled-dot-product-attention-gives-the-error-runtimeerror-no-available-kernel-aborting-execution/180900 for a discussion in the PyTorch forum). You may be able to resolve this error by replacing the line ```python OLD_GPU, USE_FLASH_ATTN, MATH_KERNEL_ON = get_sdpa_settings() ``` in [`sam2/modeling/sam/transformer.py`](sam2/modeling/sam/transformer.py) with ```python OLD_GPU, USE_FLASH_ATTN, MATH_KERNEL_ON = True, True, True ``` to relax the attention kernel setting and use other kernels than Flash Attention. </details> <details> <summary> I got `Error compiling objects for extension` </summary> <br/> You may see an error log like: > unsupported Microsoft Visual Studio version! Only the versions between 2017 and 2022 (inclusive) are supported! The nvcc flag '-allow-unsupported-compiler' can be used to override this version check; however, using an unsupported host compiler may cause compilation failure or incorrect run time execution. Use at your own risk. This is probably because your versions of CUDA and Visual Studio are incompatible (see also https://stackoverflow.com/questions/78515942/cuda-compatibility-with-visual-studio-2022-version-17-10 for a discussion on Stack Overflow).<br> You may be able to fix this by adding the `-allow-unsupported-compiler` argument to `nvcc` after L48 in the [setup.py](https://github.com/facebookresearch/sam2/blob/main/setup.py). <br> After adding the argument, `get_extensions()` will look like this: ```python def get_extensions(): srcs = ["sam2/csrc/connected_components.cu"] compile_args = { "cxx": [], "nvcc": [ "-DCUDA_HAS_FP16=1", "-D__CUDA_NO_HALF_OPERATORS__", "-D__CUDA_NO_HALF_CONVERSIONS__", "-D__CUDA_NO_HALF2_OPERATORS__", "-allow-unsupported-compiler" # Add this argument ], } ext_modules = [CUDAExtension("sam2._C", srcs, extra_compile_args=compile_args)] return ext_modules ``` </details>
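When setting `TORCH_CUDA_ARCH_LIST` as in the kernel-image issue above, it helps to know the compute capability of the GPU you will actually run on. A minimal sketch, assuming PyTorch with CUDA support is already installed:

```python
# Print each visible GPU's compute capability so TORCH_CUDA_ARCH_LIST can be
# set to a matching value before rebuilding the SAM 2 CUDA extension.
import torch

if not torch.cuda.is_available():
    print("No CUDA device visible from this environment.")
else:
    for i in range(torch.cuda.device_count()):
        major, minor = torch.cuda.get_device_capability(i)
        print(f"GPU {i}: {torch.cuda.get_device_name(i)} -> compute capability {major}.{minor}")
```

For example, an A100 reports 8.0, so `export TORCH_CUDA_ARCH_LIST="8.0"` would target it.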
{ "source": "yangchris11/samurai", "title": "sam2/INSTALL.md", "url": "https://github.com/yangchris11/samurai/blob/master/sam2/INSTALL.md", "date": "2024-11-06T22:46:05", "stars": 6475, "description": "Official repository of \"SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"", "file_size": 10578 }
# SAM 2: Segment Anything in Images and Videos **[AI at Meta, FAIR](https://ai.meta.com/research/)** [Nikhila Ravi](https://nikhilaravi.com/), [Valentin Gabeur](https://gabeur.github.io/), [Yuan-Ting Hu](https://scholar.google.com/citations?user=E8DVVYQAAAAJ&hl=en), [Ronghang Hu](https://ronghanghu.com/), [Chaitanya Ryali](https://scholar.google.com/citations?user=4LWx24UAAAAJ&hl=en), [Tengyu Ma](https://scholar.google.com/citations?user=VeTSl0wAAAAJ&hl=en), [Haitham Khedr](https://hkhedr.com/), [Roman Rädle](https://scholar.google.de/citations?user=Tpt57v0AAAAJ&hl=en), [Chloe Rolland](https://scholar.google.com/citations?hl=fr&user=n-SnMhoAAAAJ), [Laura Gustafson](https://scholar.google.com/citations?user=c8IpF9gAAAAJ&hl=en), [Eric Mintun](https://ericmintun.github.io/), [Junting Pan](https://junting.github.io/), [Kalyan Vasudev Alwala](https://scholar.google.co.in/citations?user=m34oaWEAAAAJ&hl=en), [Nicolas Carion](https://www.nicolascarion.com/), [Chao-Yuan Wu](https://chaoyuan.org/), [Ross Girshick](https://www.rossgirshick.info/), [Piotr Dollár](https://pdollar.github.io/), [Christoph Feichtenhofer](https://feichtenhofer.github.io/) [[`Paper`](https://ai.meta.com/research/publications/sam-2-segment-anything-in-images-and-videos/)] [[`Project`](https://ai.meta.com/sam2)] [[`Demo`](https://sam2.metademolab.com/)] [[`Dataset`](https://ai.meta.com/datasets/segment-anything-video)] [[`Blog`](https://ai.meta.com/blog/segment-anything-2)] [[`BibTeX`](#citing-sam-2)] ![SAM 2 architecture](assets/model_diagram.png?raw=true) **Segment Anything Model 2 (SAM 2)** is a foundation model towards solving promptable visual segmentation in images and videos. We extend SAM to video by considering images as a video with a single frame. The model design is a simple transformer architecture with streaming memory for real-time video processing. We build a model-in-the-loop data engine, which improves model and data via user interaction, to collect [**our SA-V dataset**](https://ai.meta.com/datasets/segment-anything-video), the largest video segmentation dataset to date. SAM 2 trained on our data provides strong performance across a wide range of tasks and visual domains. ![SA-V dataset](assets/sa_v_dataset.jpg?raw=true) ## Latest updates **09/30/2024 -- SAM 2.1 Developer Suite (new checkpoints, training code, web demo) is released** - A new suite of improved model checkpoints (denoted as **SAM 2.1**) are released. See [Model Description](#model-description) for details. * To use the new SAM 2.1 checkpoints, you need the latest model code from this repo. If you have installed an earlier version of this repo, please first uninstall the previous version via `pip uninstall SAM-2`, pull the latest code from this repo (with `git pull`), and then reinstall the repo following [Installation](#installation) below. - The training (and fine-tuning) code has been released. See [`training/README.md`](training/README.md) on how to get started. - The frontend + backend code for the SAM 2 web demo has been released. See [`demo/README.md`](demo/README.md) for details. ## Installation SAM 2 needs to be installed first before use. The code requires `python>=3.10`, as well as `torch>=2.3.1` and `torchvision>=0.18.1`. Please follow the instructions [here](https://pytorch.org/get-started/locally/) to install both PyTorch and TorchVision dependencies. You can install SAM 2 on a GPU machine using: ```bash git clone https://github.com/facebookresearch/sam2.git && cd sam2 pip install -e . 
``` If you are installing on Windows, it's strongly recommended to use [Windows Subsystem for Linux (WSL)](https://learn.microsoft.com/en-us/windows/wsl/install) with Ubuntu. To use the SAM 2 predictor and run the example notebooks, `jupyter` and `matplotlib` are required and can be installed by: ```bash pip install -e ".[notebooks]" ``` Note: 1. It's recommended to create a new Python environment via [Anaconda](https://www.anaconda.com/) for this installation and install PyTorch 2.3.1 (or higher) via `pip` following https://pytorch.org/. If you have a PyTorch version lower than 2.3.1 in your current environment, the installation command above will try to upgrade it to the latest PyTorch version using `pip`. 2. The step above requires compiling a custom CUDA kernel with the `nvcc` compiler. If it isn't already available on your machine, please install the [CUDA toolkits](https://developer.nvidia.com/cuda-toolkit-archive) with a version that matches your PyTorch CUDA version. 3. If you see a message like `Failed to build the SAM 2 CUDA extension` during installation, you can ignore it and still use SAM 2 (some post-processing functionality may be limited, but it doesn't affect the results in most cases). Please see [`INSTALL.md`](./INSTALL.md) for FAQs on potential issues and solutions. ## Getting Started ### Download Checkpoints First, we need to download a model checkpoint. All the model checkpoints can be downloaded by running: ```bash cd checkpoints && \ ./download_ckpts.sh && \ cd .. ``` or individually from: - [sam2.1_hiera_tiny.pt](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_tiny.pt) - [sam2.1_hiera_small.pt](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_small.pt) - [sam2.1_hiera_base_plus.pt](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_base_plus.pt) - [sam2.1_hiera_large.pt](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_large.pt) (note that these are the improved checkpoints denoted as SAM 2.1; see [Model Description](#model-description) for details.) Then SAM 2 can be used in a few lines as follows for image and video prediction. ### Image prediction SAM 2 has all the capabilities of [SAM](https://github.com/facebookresearch/segment-anything) on static images, and we provide image prediction APIs that closely resemble SAM for image use cases. The `SAM2ImagePredictor` class has an easy interface for image prompting. ```python import torch from sam2.build_sam import build_sam2 from sam2.sam2_image_predictor import SAM2ImagePredictor checkpoint = "./checkpoints/sam2.1_hiera_large.pt" model_cfg = "configs/sam2.1/sam2.1_hiera_l.yaml" predictor = SAM2ImagePredictor(build_sam2(model_cfg, checkpoint)) with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16): predictor.set_image(<your_image>) masks, _, _ = predictor.predict(<input_prompts>) ``` Please refer to the examples in [image_predictor_example.ipynb](./notebooks/image_predictor_example.ipynb) (also in Colab [here](https://colab.research.google.com/github/facebookresearch/sam2/blob/main/notebooks/image_predictor_example.ipynb)) for static image use cases. SAM 2 also supports automatic mask generation on images just like SAM. Please see [automatic_mask_generator_example.ipynb](./notebooks/automatic_mask_generator_example.ipynb) (also in Colab [here](https://colab.research.google.com/github/facebookresearch/sam2/blob/main/notebooks/automatic_mask_generator_example.ipynb)) for automatic mask generation in images. 
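To make the placeholders in the image-prediction snippet above concrete, here is a minimal sketch (not from the original README) that loads an image with PIL and prompts the predictor with a single foreground click; the file path and click coordinates are made-up examples.

```python
import numpy as np
import torch
from PIL import Image

from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

checkpoint = "./checkpoints/sam2.1_hiera_large.pt"
model_cfg = "configs/sam2.1/sam2.1_hiera_l.yaml"
predictor = SAM2ImagePredictor(build_sam2(model_cfg, checkpoint))

# "truck.jpg" and the click location below are placeholders for your own data.
image = np.array(Image.open("truck.jpg").convert("RGB"))

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    predictor.set_image(image)
    # One positive click (label 1) at pixel (x=500, y=375).
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[500, 375]]),
        point_labels=np.array([1]),
        multimask_output=True,  # return several candidate masks with confidence scores
    )

best_mask = masks[scores.argmax()]  # (H, W) mask for the highest-scoring candidate
```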
### Video prediction

For promptable segmentation and tracking in videos, we provide a video predictor with APIs to, for example, add prompts and propagate masklets throughout a video. SAM 2 supports video inference on multiple objects and uses an inference state to keep track of the interactions in each video.

```python
import torch
from sam2.build_sam import build_sam2_video_predictor

checkpoint = "./checkpoints/sam2.1_hiera_large.pt"
model_cfg = "configs/sam2.1/sam2.1_hiera_l.yaml"
predictor = build_sam2_video_predictor(model_cfg, checkpoint)

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    state = predictor.init_state(<your_video>)

    # add new prompts and instantly get the output on the same frame
    frame_idx, object_ids, masks = predictor.add_new_points_or_box(state, <your_prompts>)

    # propagate the prompts to get masklets throughout the video
    for frame_idx, object_ids, masks in predictor.propagate_in_video(state):
        ...
```

Please refer to the examples in [video_predictor_example.ipynb](./notebooks/video_predictor_example.ipynb) (also in Colab [here](https://colab.research.google.com/github/facebookresearch/sam2/blob/main/notebooks/video_predictor_example.ipynb)) for details on how to add click or box prompts, make refinements, and track multiple objects in videos.

## Load from 🤗 Hugging Face

Alternatively, models can also be loaded from [Hugging Face](https://huggingface.co/models?search=facebook/sam2) (requires `pip install huggingface_hub`).

For image prediction:

```python
import torch
from sam2.sam2_image_predictor import SAM2ImagePredictor

predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large")

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    predictor.set_image(<your_image>)
    masks, _, _ = predictor.predict(<input_prompts>)
```

For video prediction:

```python
import torch
from sam2.sam2_video_predictor import SAM2VideoPredictor

predictor = SAM2VideoPredictor.from_pretrained("facebook/sam2-hiera-large")

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    state = predictor.init_state(<your_video>)

    # add new prompts and instantly get the output on the same frame
    frame_idx, object_ids, masks = predictor.add_new_points_or_box(state, <your_prompts>)

    # propagate the prompts to get masklets throughout the video
    for frame_idx, object_ids, masks in predictor.propagate_in_video(state):
        ...
```

## Model Description

### SAM 2.1 checkpoints

The table below shows the improved SAM 2.1 checkpoints released on September 29, 2024.
| **Model** | **Size (M)** | **Speed (FPS)** | **SA-V test (J&F)** | **MOSE val (J&F)** | **LVOS v2 (J&F)** | | :------------------: | :----------: | :--------------------: | :-----------------: | :----------------: | :---------------: | | sam2.1_hiera_tiny <br /> ([config](sam2/configs/sam2.1/sam2.1_hiera_t.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_tiny.pt)) | 38.9 | 47.2 | 76.5 | 71.8 | 77.3 | | sam2.1_hiera_small <br /> ([config](sam2/configs/sam2.1/sam2.1_hiera_s.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_small.pt)) | 46 | 43.3 (53.0 compiled\*) | 76.6 | 73.5 | 78.3 | | sam2.1_hiera_base_plus <br /> ([config](sam2/configs/sam2.1/sam2.1_hiera_b+.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_base_plus.pt)) | 80.8 | 34.8 (43.8 compiled\*) | 78.2 | 73.7 | 78.2 | | sam2.1_hiera_large <br /> ([config](sam2/configs/sam2.1/sam2.1_hiera_l.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_large.pt)) | 224.4 | 24.2 (30.2 compiled\*) | 79.5 | 74.6 | 80.6 | ### SAM 2 checkpoints The previous SAM 2 checkpoints released on July 29, 2024 can be found as follows: | **Model** | **Size (M)** | **Speed (FPS)** | **SA-V test (J&F)** | **MOSE val (J&F)** | **LVOS v2 (J&F)** | | :------------------: | :----------: | :--------------------: | :-----------------: | :----------------: | :---------------: | | sam2_hiera_tiny <br /> ([config](sam2/configs/sam2/sam2_hiera_t.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_tiny.pt)) | 38.9 | 47.2 | 75.0 | 70.9 | 75.3 | | sam2_hiera_small <br /> ([config](sam2/configs/sam2/sam2_hiera_s.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_small.pt)) | 46 | 43.3 (53.0 compiled\*) | 74.9 | 71.5 | 76.4 | | sam2_hiera_base_plus <br /> ([config](sam2/configs/sam2/sam2_hiera_b+.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_base_plus.pt)) | 80.8 | 34.8 (43.8 compiled\*) | 74.7 | 72.8 | 75.8 | | sam2_hiera_large <br /> ([config](sam2/configs/sam2/sam2_hiera_l.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_large.pt)) | 224.4 | 24.2 (30.2 compiled\*) | 76.0 | 74.6 | 79.8 | \* Compile the model by setting `compile_image_encoder: True` in the config. ## Segment Anything Video Dataset See [sav_dataset/README.md](sav_dataset/README.md) for details. ## Training SAM 2 You can train or fine-tune SAM 2 on custom datasets of images, videos, or both. Please check the training [README](training/README.md) on how to get started. ## Web demo for SAM 2 We have released the frontend + backend code for the SAM 2 web demo (a locally deployable version similar to https://sam2.metademolab.com/demo). Please see the web demo [README](demo/README.md) for details. ## License The SAM 2 model checkpoints, SAM 2 demo code (front-end and back-end), and SAM 2 training code are licensed under [Apache 2.0](./LICENSE), however the [Inter Font](https://github.com/rsms/inter?tab=OFL-1.1-1-ov-file) and [Noto Color Emoji](https://github.com/googlefonts/noto-emoji) used in the SAM 2 demo code are made available under the [SIL Open Font License, version 1.1](https://openfontlicense.org/open-font-license-official-text/). ## Contributing See [contributing](CONTRIBUTING.md) and the [code of conduct](CODE_OF_CONDUCT.md). 
## Contributors The SAM 2 project was made possible with the help of many contributors (alphabetical): Karen Bergan, Daniel Bolya, Alex Bosenberg, Kai Brown, Vispi Cassod, Christopher Chedeau, Ida Cheng, Luc Dahlin, Shoubhik Debnath, Rene Martinez Doehner, Grant Gardner, Sahir Gomez, Rishi Godugu, Baishan Guo, Caleb Ho, Andrew Huang, Somya Jain, Bob Kamma, Amanda Kallet, Jake Kinney, Alexander Kirillov, Shiva Koduvayur, Devansh Kukreja, Robert Kuo, Aohan Lin, Parth Malani, Jitendra Malik, Mallika Malhotra, Miguel Martin, Alexander Miller, Sasha Mitts, William Ngan, George Orlin, Joelle Pineau, Kate Saenko, Rodrick Shepard, Azita Shokrpour, David Soofian, Jonathan Torres, Jenny Truong, Sagar Vaze, Meng Wang, Claudette Ward, Pengchuan Zhang. Third-party code: we use a GPU-based connected component algorithm adapted from [`cc_torch`](https://github.com/zsef123/Connected_components_PyTorch) (with its license in [`LICENSE_cctorch`](./LICENSE_cctorch)) as an optional post-processing step for the mask predictions. ## Citing SAM 2 If you use SAM 2 or the SA-V dataset in your research, please use the following BibTeX entry. ```bibtex @article{ravi2024sam2, title={SAM 2: Segment Anything in Images and Videos}, author={Ravi, Nikhila and Gabeur, Valentin and Hu, Yuan-Ting and Hu, Ronghang and Ryali, Chaitanya and Ma, Tengyu and Khedr, Haitham and R{\"a}dle, Roman and Rolland, Chloe and Gustafson, Laura and Mintun, Eric and Pan, Junting and Alwala, Kalyan Vasudev and Carion, Nicolas and Wu, Chao-Yuan and Girshick, Ross and Doll{\'a}r, Piotr and Feichtenhofer, Christoph}, journal={arXiv preprint arXiv:2408.00714}, url={https://arxiv.org/abs/2408.00714}, year={2024} } ```
{ "source": "yangchris11/samurai", "title": "sam2/README.md", "url": "https://github.com/yangchris11/samurai/blob/master/sam2/README.md", "date": "2024-11-06T22:46:05", "stars": 6475, "description": "Official repository of \"SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"", "file_size": 15439 }
# SAM 2 Demo Welcome to the SAM 2 Demo! This project consists of a frontend built with React TypeScript and Vite and a backend service using Python Flask and Strawberry GraphQL. Both components can be run in Docker containers or locally on MPS (Metal Performance Shaders) or CPU. However, running the backend service on MPS or CPU devices may result in significantly slower performance (FPS). ## Prerequisites Before you begin, ensure you have the following installed on your system: - Docker and Docker Compose - [OPTIONAL] Node.js and Yarn for running frontend locally - [OPTIONAL] Anaconda for running backend locally ### Installing Docker To install Docker, follow these steps: 1. Go to the [Docker website](https://www.docker.com/get-started) 2. Follow the installation instructions for your operating system. ### [OPTIONAL] Installing Node.js and Yarn To install Node.js and Yarn, follow these steps: 1. Go to the [Node.js website](https://nodejs.org/en/download/). 2. Follow the installation instructions for your operating system. 3. Once Node.js is installed, open a terminal or command prompt and run the following command to install Yarn: ``` npm install -g yarn ``` ### [OPTIONAL] Installing Anaconda To install Anaconda, follow these steps: 1. Go to the [Anaconda website](https://www.anaconda.com/products/distribution). 2. Follow the installation instructions for your operating system. ## Quick Start To get both the frontend and backend running quickly using Docker, you can use the following command: ```bash docker compose up --build ``` > [!WARNING] > On macOS, Docker containers only support running on CPU. MPS is not supported through Docker. If you want to run the demo backend service on MPS, you will need to run it locally (see "Running the Backend Locally" below). This will build and start both services. You can access them at: - **Frontend:** [http://localhost:7262](http://localhost:7262) - **Backend:** [http://localhost:7263/graphql](http://localhost:7263/graphql) ## Running Backend with MPS Support MPS (Metal Performance Shaders) is not supported with Docker. To use MPS, you need to run the backend on your local machine. ### Setting Up Your Environment 1. **Create Conda environment** Create a new Conda environment for this project by running the following command or use your existing conda environment for SAM 2: ``` conda create --name sam2-demo python=3.10 --yes ``` This will create a new environment named `sam2-demo` with Python 3.10 as the interpreter. 2. **Activate the Conda environment:** ```bash conda activate sam2-demo ``` 3. **Install ffmpeg** ```bash conda install -c conda-forge ffmpeg ``` 4. **Install SAM 2 demo dependencies:** Install project dependencies by running the following command in the SAM 2 checkout root directory: ```bash pip install -e '.[interactive-demo]' ``` ### Running the Backend Locally Download the SAM 2 checkpoints: ```bash (cd ./checkpoints && ./download_ckpts.sh) ``` Use the following command to start the backend with MPS support: ```bash cd demo/backend/server/ ``` ```bash PYTORCH_ENABLE_MPS_FALLBACK=1 \ APP_ROOT="$(pwd)/../../../" \ APP_URL=http://localhost:7263 \ MODEL_SIZE=base_plus \ DATA_PATH="$(pwd)/../../data" \ DEFAULT_VIDEO_PATH=gallery/05_default_juggle.mp4 \ gunicorn \ --worker-class gthread app:app \ --workers 1 \ --threads 2 \ --bind 0.0.0.0:7263 \ --timeout 60 ``` Options for the `MODEL_SIZE` argument are "tiny", "small", "base_plus" (default), and "large". 
> [!WARNING] > Running the backend service on MPS devices can cause fatal crashes with the Gunicorn worker due to insufficient MPS memory. Try switching to CPU devices by setting the `SAM2_DEMO_FORCE_CPU_DEVICE=1` environment variable. ### Starting the Frontend If you wish to run the frontend separately (useful for development), follow these steps: 1. **Navigate to demo frontend directory:** ```bash cd demo/frontend ``` 2. **Install dependencies:** ```bash yarn install ``` 3. **Start the development server:** ```bash yarn dev --port 7262 ``` This will start the frontend development server on [http://localhost:7262](http://localhost:7262). ## Docker Tips - To rebuild the Docker containers (useful if you've made changes to the Dockerfile or dependencies): ```bash docker compose up --build ``` - To stop the Docker containers: ```bash docker compose down ``` ## Contributing Contributions are welcome! Please read our contributing guidelines to get started. ## License See the LICENSE file for details. --- By following these instructions, you should have a fully functional development environment for both the frontend and backend of the SAM 2 Demo. Happy coding!
{ "source": "yangchris11/samurai", "title": "sam2/demo/README.md", "url": "https://github.com/yangchris11/samurai/blob/master/sam2/demo/README.md", "date": "2024-11-06T22:46:05", "stars": 6475, "description": "Official repository of \"SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"", "file_size": 4796 }
# Segment Anything Video (SA-V) Dataset

## Overview

[Segment Anything Video (SA-V)](https://ai.meta.com/datasets/segment-anything-video/) consists of 51K diverse videos and 643K high-quality spatio-temporal segmentation masks (i.e., masklets). The dataset is released under the CC BY 4.0 license. Browse the dataset [here](https://sam2.metademolab.com/dataset).

![SA-V dataset](../assets/sa_v_dataset.jpg?raw=true)

## Getting Started

### Download the dataset

Visit [here](https://ai.meta.com/datasets/segment-anything-video-downloads/) to download SA-V, including the training, val and test sets.

### Dataset Stats

|            | Num Videos | Num Masklets                              |
| ---------- | ---------- | ----------------------------------------- |
| SA-V train | 50,583     | 642,036 (auto 451,720 and manual 190,316) |
| SA-V val   | 155        | 293                                       |
| SA-V test  | 150        | 278                                       |

### Notebooks

To load and visualize the SA-V training set annotations, refer to the example [sav_visualization_example.ipynb](./sav_visualization_example.ipynb) notebook.

### SA-V train

For the SA-V training set we release the mp4 videos and store the masklet annotations per video as json files. Automatic masklets and manual masklets are stored separately as two json files: `{video_id}_auto.json` and `{video_id}_manual.json`. They can be loaded as dictionaries in Python in the format below.

```
{
    "video_id"                     : str; video id
    "video_duration"               : float64; the duration in seconds of this video
    "video_frame_count"            : float64; the number of frames in the video
    "video_height"                 : float64; the height of the video
    "video_width"                  : float64; the width of the video
    "video_resolution"             : float64; video_height $\times$ video_width
    "video_environment"            : List[str]; "Indoor" or "Outdoor"
    "video_split"                  : str; "train" for training set
    "masklet"                      : List[List[Dict]]; masklet annotations in list of list of RLEs.
                                     The outer list is over frames in the video and the inner list
                                     is over objects in the video.
    "masklet_id"                   : List[int]; the masklet ids
    "masklet_size_rel"             : List[float]; the average mask area normalized by resolution
                                     across all the frames where the object is visible
    "masklet_size_abs"             : List[float]; the average mask area (in pixels)
                                     across all the frames where the object is visible
    "masklet_size_bucket"          : List[str]; "small": $1$ <= masklet_size_abs < $32^2$,
                                     "medium": $32^2$ <= masklet_size_abs < $96^2$,
                                     and "large": masklet_size_abs > $96^2$
    "masklet_visibility_changes"   : List[int]; the number of times where the visibility changes
                                     after the first appearance (e.g., invisible -> visible
                                     or visible -> invisible)
    "masklet_first_appeared_frame" : List[int]; the index of the frame where the object appears
                                     the first time in the video. Always 0 for auto masklets.
    "masklet_frame_count"          : List[int]; the number of frames being annotated. Note that
                                     videos are annotated at 6 fps (annotated every 4 frames)
                                     while the videos are at 24 fps.
    "masklet_edited_frame_count"   : List[int]; the number of frames being edited by human annotators.
                                     Always 0 for auto masklets.
    "masklet_type"                 : List[str]; "auto" or "manual"
    "masklet_stability_score"      : Optional[List[List[float]]]; per-mask stability scores. Auto annotation only.
    "masklet_num"                  : int; the number of manual/auto masklets in the video
}
```

Note that in SA-V train, there are in total 50,583 videos where all of them have manual annotations. Among the 50,583 videos there are 48,436 videos that also have automatic annotations.
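As a small illustration of the schema above (not part of the original dataset docs), the sketch below loads one per-video annotation file and decodes a single mask. It assumes the RLE entries are COCO-style dicts usable with `pycocotools`, and the file path and video id are placeholders.

```python
import json

from pycocotools import mask as mask_utils  # pip install pycocotools

video_id = "sav_000001"  # placeholder video id
annotation_path = f"{video_id}_manual.json"  # adjust to wherever the json files are stored

with open(annotation_path) as f:
    ann = json.load(f)

print(ann["video_frame_count"], ann["masklet_num"])

# "masklet" is a list over annotated frames; each entry is a list over objects.
rle = ann["masklet"][0][0]  # first annotated frame, first object
if isinstance(rle["counts"], str):
    rle["counts"] = rle["counts"].encode("utf-8")  # pycocotools expects bytes counts

binary_mask = mask_utils.decode(rle)  # (H, W) uint8 array with 0/1 values
print(binary_mask.shape, binary_mask.sum())
```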
### SA-V val and test For SA-V val and test sets, we release the extracted frames as jpeg files, and the masks as png files with the following directory structure: ``` sav_val(sav_test) ├── sav_val.txt (sav_test.txt): a list of video ids in the split ├── JPEGImages_24fps # videos are extracted at 24 fps │ ├── {video_id} │ │ ├── 00000.jpg # video frame │ │ ├── 00001.jpg # video frame │ │ ├── 00002.jpg # video frame │ │ ├── 00003.jpg # video frame │ │ └── ... │ ├── {video_id} │ ├── {video_id} │ └── ... └── Annotations_6fps # videos are annotated at 6 fps ├── {video_id} │ ├── 000 # obj 000 │ │ ├── 00000.png # mask for object 000 in 00000.jpg │ │ ├── 00004.png # mask for object 000 in 00004.jpg │ │ ├── 00008.png # mask for object 000 in 00008.jpg │ │ ├── 00012.png # mask for object 000 in 00012.jpg │ │ └── ... │ ├── 001 # obj 001 │ ├── 002 # obj 002 │ └── ... ├── {video_id} ├── {video_id} └── ... ``` All masklets in val and test sets are manually annotated in every frame by annotators. For each annotated object in a video, we store the annotated masks in a single png. This is because the annotated objects may overlap, e.g., it is possible in our SA-V dataset for there to be a mask for the whole person as well as a separate mask for their hands. ## SA-V Val and Test Evaluation We provide an evaluator to compute the common J and F metrics on SA-V val and test sets. To run the evaluation, we need to first install a few dependencies as follows: ``` pip install -r requirements.txt ``` Then we can evaluate the predictions as follows: ``` python sav_evaluator.py --gt_root {GT_ROOT} --pred_root {PRED_ROOT} ``` or run ``` python sav_evaluator.py --help ``` to print a complete help message. The evaluator expects the `GT_ROOT` to be one of the following folder structures, and `GT_ROOT` and `PRED_ROOT` to have the same structure. - Same as SA-V val and test directory structure ``` {GT_ROOT} # gt root folder ├── {video_id} │ ├── 000 # all masks associated with obj 000 │ │ ├── 00000.png # mask for object 000 in frame 00000 (binary mask) │ │ └── ... │ ├── 001 # all masks associated with obj 001 │ ├── 002 # all masks associated with obj 002 │ └── ... ├── {video_id} ├── {video_id} └── ... ``` In the paper for the experiments on SA-V val and test, we run inference on the 24 fps videos, and evaluate on the subset of frames where we have ground truth annotations (first and last annotated frames dropped). The evaluator will ignore the masks in frames where we don't have ground truth annotations. - Same as [DAVIS](https://github.com/davisvideochallenge/davis2017-evaluation) directory structure ``` {GT_ROOT} # gt root folder ├── {video_id} │ ├── 00000.png # annotations in frame 00000 (may contain multiple objects) │ └── ... ├── {video_id} ├── {video_id} └── ... ``` ## License The evaluation code is licensed under the [BSD 3 license](./LICENSE). Please refer to the paper for more details on the models. The videos and annotations in SA-V Dataset are released under CC BY 4.0. Third-party code: the evaluation software is heavily adapted from [`VOS-Benchmark`](https://github.com/hkchengrex/vos-benchmark) and [`DAVIS`](https://github.com/davisvideochallenge/davis2017-evaluation) (with their licenses in [`LICENSE_DAVIS`](./LICENSE_DAVIS) and [`LICENSE_VOS_BENCHMARK`](./LICENSE_VOS_BENCHMARK)).
{ "source": "yangchris11/samurai", "title": "sam2/sav_dataset/README.md", "url": "https://github.com/yangchris11/samurai/blob/master/sam2/sav_dataset/README.md", "date": "2024-11-06T22:46:05", "stars": 6475, "description": "Official repository of \"SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"", "file_size": 8081 }
## SAM 2 toolkits

This directory provides toolkits for additional SAM 2 use cases.

### Semi-supervised VOS inference

The `vos_inference.py` script can be used to generate predictions for semi-supervised video object segmentation (VOS) evaluation on datasets such as [DAVIS](https://davischallenge.org/index.html), [MOSE](https://henghuiding.github.io/MOSE/) or the SA-V dataset.

After installing SAM 2 and its dependencies, it can be used as follows ([DAVIS 2017 dataset](https://davischallenge.org/davis2017/code.html) as an example). This script saves the prediction PNG files to the `--output_mask_dir`.

```bash
python ./tools/vos_inference.py \
  --sam2_cfg configs/sam2.1/sam2.1_hiera_b+.yaml \
  --sam2_checkpoint ./checkpoints/sam2.1_hiera_base_plus.pt \
  --base_video_dir /path-to-davis-2017/JPEGImages/480p \
  --input_mask_dir /path-to-davis-2017/Annotations/480p \
  --video_list_file /path-to-davis-2017/ImageSets/2017/val.txt \
  --output_mask_dir ./outputs/davis_2017_pred_pngs
```
(replace `/path-to-davis-2017` with the path to DAVIS 2017 dataset)

To evaluate on the SA-V dataset with per-object PNG files for the object masks, we need to **add the `--per_obj_png_file` flag** as follows (using SA-V val as an example). With this flag, the script also saves its output masks as per-object PNG files under `--output_mask_dir`.

```bash
python ./tools/vos_inference.py \
  --sam2_cfg configs/sam2.1/sam2.1_hiera_b+.yaml \
  --sam2_checkpoint ./checkpoints/sam2.1_hiera_base_plus.pt \
  --base_video_dir /path-to-sav-val/JPEGImages_24fps \
  --input_mask_dir /path-to-sav-val/Annotations_6fps \
  --video_list_file /path-to-sav-val/sav_val.txt \
  --per_obj_png_file \
  --output_mask_dir ./outputs/sav_val_pred_pngs
```
(replace `/path-to-sav-val` with the path to SA-V val)

Then, we can use the evaluation tools or servers for each dataset to get the performance of the prediction PNG files above.

Note: by default, the `vos_inference.py` script above assumes that all objects to track already appear on frame 0 in each video (as is the case in DAVIS, MOSE or SA-V). **For VOS datasets that don't have all objects to track appearing in the first frame (such as LVOS or YouTube-VOS), please add the `--track_object_appearing_later_in_video` flag when using `vos_inference.py`**.

### SAMURAI VOS inference

To run SAMURAI-style VOS inference on SA-V val or test, use the SAMURAI config and pass `sav_val.txt` (or `sav_test.txt`) as the video list:

```bash
python ./tools/vos_inference.py \
  --sam2_cfg configs/samurai/sam2.1_hiera_l.yaml \
  --sam2_checkpoint ./checkpoints/sam2.1_hiera_large.pt \
  --base_video_dir /path-to-sav-val-or-sav-test/JPEGImages_24fps/ \
  --input_mask_dir /path-to-sav-val-or-sav-test/Annotations_6fps \
  --video_list_file /path-to-sav-val-or-sav-test/sav_val.txt \
  --per_obj_png_file \
  --output_mask_dir /path-to-save-results/ \
  --track_object_appearing_later_in_video
```
{ "source": "yangchris11/samurai", "title": "sam2/tools/README.md", "url": "https://github.com/yangchris11/samurai/blob/master/sam2/tools/README.md", "date": "2024-11-06T22:46:05", "stars": 6475, "description": "Official repository of \"SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"", "file_size": 2808 }
# Training Code for SAM 2 This folder contains the training code for SAM 2, a foundation model for promptable visual segmentation in images and videos. The code allows users to train and fine-tune SAM 2 on their own datasets (image, video, or both). ## Structure The training code is organized into the following subfolders: * `dataset`: This folder contains image and video dataset and dataloader classes as well as their transforms. * `model`: This folder contains the main model class (`SAM2Train`) for training/fine-tuning. `SAM2Train` inherits from `SAM2Base` model and provides functions to enable training or fine-tuning SAM 2. It also accepts all training-time parameters used for simulating user prompts (e.g. iterative point sampling). * `utils`: This folder contains training utils such as loggers and distributed training utils. * `scripts`: This folder contains the script to extract the frames of SA-V dataset to be used in training. * `loss_fns.py`: This file has the main loss class (`MultiStepMultiMasksAndIous`) used for training. * `optimizer.py`: This file contains all optimizer utils that support arbitrary schedulers. * `trainer.py`: This file contains the `Trainer` class that accepts all the `Hydra` configurable modules (model, optimizer, datasets, etc..) and implements the main train/eval loop. * `train.py`: This script is used to launch training jobs. It supports single and multi-node jobs. For usage, please check the [Getting Started](README.md#getting-started) section or run `python training/train.py -h` ## Getting Started To get started with the training code, we provide a simple example to fine-tune our checkpoints on [MOSE](https://henghuiding.github.io/MOSE/) dataset, which can be extended to your custom datasets. #### Requirements: - We assume training on A100 GPUs with **80 GB** of memory. - Download the MOSE dataset using one of the provided links from [here](https://github.com/henghuiding/MOSE-api?tab=readme-ov-file#download). #### Steps to fine-tune on MOSE: - Install the packages required for training by running `pip install -e ".[dev]"`. - Set the paths for MOSE dataset in `configs/sam2.1_training/sam2.1_hiera_b+_MOSE_finetune.yaml`. ```yaml dataset: # PATHS to Dataset img_folder: null # PATH to MOSE JPEGImages folder gt_folder: null # PATH to MOSE Annotations folder file_list_txt: null # Optional PATH to filelist containing a subset of videos to be used for training ``` - To fine-tune the base model on MOSE using 8 GPUs, run ```python python training/train.py \ -c configs/sam2.1_training/sam2.1_hiera_b+_MOSE_finetune.yaml \ --use-cluster 0 \ --num-gpus 8 ``` We also support multi-node training on a cluster using [SLURM](https://slurm.schedmd.com/documentation.html), for example, you can train on 2 nodes by running ```python python training/train.py \ -c configs/sam2.1_training/sam2.1_hiera_b+_MOSE_finetune.yaml \ --use-cluster 1 \ --num-gpus 8 \ --num-nodes 2 --partition $PARTITION \ --qos $QOS \ --account $ACCOUNT ``` where partition, qos, and account are optional and depend on your SLURM configuration. By default, the checkpoint and logs will be saved under `sam2_logs` directory in the root of the repo. Alternatively, you can set the experiment log directory in the config file as follows: ```yaml experiment_log_dir: null # Path to log directory, defaults to ./sam2_logs/${config_name} ``` The training losses can be monitored using `tensorboard` logs stored under `tensorboard/` in the experiment log directory. 
We also provide a sample validation [split]( ../training/assets/MOSE_sample_val_list.txt) for evaluation purposes. To generate predictions, follow this [guide](../tools/README.md) on how to use our `vos_inference.py` script. After generating the predictions, you can run the `sav_evaluator.py` as detailed [here](../sav_dataset/README.md#sa-v-val-and-test-evaluation). The expected MOSE J&F after fine-tuning the Base plus model is 79.4. After training/fine-tuning, you can then use the new checkpoint (saved in `checkpoints/` in the experiment log directory) similar to SAM 2 released checkpoints (as illustrated [here](../README.md#image-prediction)). ## Training on images and videos The code supports training on images and videos (similar to how SAM 2 is trained). We provide classes for loading SA-1B as a sample image dataset, SA-V as a sample video dataset, as well as any DAVIS-style video dataset (e.g. MOSE). Note that to train on SA-V, you must first extract all videos to JPEG frames using the provided extraction [script](./scripts/sav_frame_extraction_submitit.py). Below is an example of how to setup the datasets in your config to train on a mix of image and video datasets: ```yaml data: train: _target_: training.dataset.sam2_datasets.TorchTrainMixedDataset phases_per_epoch: ${phases_per_epoch} # Chunks a single epoch into smaller phases batch_sizes: # List of batch sizes corresponding to each dataset - ${bs1} # Batch size of dataset 1 - ${bs2} # Batch size of dataset 2 datasets: # SA1B as an example of an image dataset - _target_: training.dataset.vos_dataset.VOSDataset training: true video_dataset: _target_: training.dataset.vos_raw_dataset.SA1BRawDataset img_folder: ${path_to_img_folder} gt_folder: ${path_to_gt_folder} file_list_txt: ${path_to_train_filelist} # Optional sampler: _target_: training.dataset.vos_sampler.RandomUniformSampler num_frames: 1 max_num_objects: ${max_num_objects_per_image} transforms: ${image_transforms} # SA-V as an example of a video dataset - _target_: training.dataset.vos_dataset.VOSDataset training: true video_dataset: _target_: training.dataset.vos_raw_dataset.JSONRawDataset img_folder: ${path_to_img_folder} gt_folder: ${path_to_gt_folder} file_list_txt: ${path_to_train_filelist} # Optional ann_every: 4 sampler: _target_: training.dataset.vos_sampler.RandomUniformSampler num_frames: 8 # Number of frames per video max_num_objects: ${max_num_objects_per_video} reverse_time_prob: ${reverse_time_prob} # probability to reverse video transforms: ${video_transforms} shuffle: True num_workers: ${num_train_workers} pin_memory: True drop_last: True collate_fn: _target_: training.utils.data_utils.collate_fn _partial_: true dict_key: all ```
{ "source": "yangchris11/samurai", "title": "sam2/training/README.md", "url": "https://github.com/yangchris11/samurai/blob/master/sam2/training/README.md", "date": "2024-11-06T22:46:05", "stars": 6475, "description": "Official repository of \"SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"", "file_size": 6666 }
# README

## Description for different text files

GOT10K
- got10k_train_full_split.txt: the complete GOT-10K training set (9,335 videos)
- got10k_train_split.txt: part of the videos from the GOT-10K training set
- got10k_val_split.txt: another part of the videos from the GOT-10K training set
- got10k_vot_exclude.txt: 1k videos that must not be used to train models that are later tested on VOT (as required by the [VOT Challenge](https://www.votchallenge.net/vot2020/participation.html))
- got10k_vot_train_split.txt: part of the videos from the "VOT-permitted" GOT-10K training set
- got10k_vot_val_split.txt: another part of the videos from the "VOT-permitted" GOT-10K training set

LaSOT
- lasot_train_split.txt: the complete LaSOT training set

TrackingNet
- trackingnet_classmap.txt: the map from sequence name to target class for the TrackingNet dataset
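Purely as an illustration (the exact file layout is not documented here), a split file like these could be read into a list of sequence names with a couple of lines of Python, assuming one name per line:

```python
from pathlib import Path


def load_split(path: str) -> list[str]:
    """Read a data-spec text file, assuming one sequence name per line."""
    return [line.strip() for line in Path(path).read_text().splitlines() if line.strip()]


# Paths are relative to the repository root; adjust as needed.
train_sequences = load_split("lib/train/data_specs/got10k_train_split.txt")
print(len(train_sequences), train_sequences[:3])
```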
{ "source": "yangchris11/samurai", "title": "lib/train/data_specs/README.md", "url": "https://github.com/yangchris11/samurai/blob/master/lib/train/data_specs/README.md", "date": "2024-11-06T22:46:05", "stars": 6475, "description": "Official repository of \"SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"", "file_size": 843 }
<div align="center"> <h1>s1: Simple test-time scaling</h1> <p>Minimal recipe for test-time scaling and strong reasoning performance matching o1-preview with just 1,000 examples & budget forcing </p> </div> <br> ![](visuals/scaling.png) **************************************************************** **Updates:** * 2025-02: We released [s1.1](https://huggingface.co/simplescaling/s1.1-32B) a better model than s1 by reusing the same s1K questions but with reasoning traces generated by r1 instead of Gemini: [s1K-1.1](https://huggingface.co/datasets/simplescaling/s1K-1.1). Check [this tweet](https://x.com/Muennighoff/status/1889310803746246694) for details * 2025-01: We released [our paper](https://arxiv.org/abs/2501.19393) announced via [this tweet](https://x.com/Muennighoff/status/1886405528777073134). **************************************************************** This repository provides an overview of all resources for the paper ["s1: Simple test-time scaling"](https://arxiv.org/abs/2501.19393). - [Artifacts](#artifacts) - [Structure](#structure) - [Inference](#inference) - [vLLM](#vllm) - [vLLM with budget forcing](#vllm-with-budget-forcing) - [transformers](#transformers) - [Training](#training) - [Evaluation](#evaluation) - [Data](#data) - [Visuals](#visuals) - [Known Issues](#known-issues) - [Citation](#citation) ### Artifacts - **Paper**: https://arxiv.org/abs/2501.19393 - **Model**: https://hf.co/simplescaling/s1-32B - **Data**: https://hf.co/datasets/simplescaling/s1K - s1-prob: https://hf.co/datasets/simplescaling/s1-prob - s1-teasers: https://hf.co/datasets/simplescaling/s1-teasers - Full 59K: https://hf.co/datasets/simplescaling/data_ablation_full59K ### Structure - `eval/`: Evaluation scripts - `data/`: Synthetic data creation scripts & co - `train/`: Training scripts ### Inference #### vLLM Install the `vllm` library and run: ```python from vllm import LLM, SamplingParams from transformers import AutoTokenizer model = LLM( "simplescaling/s1.1-32B", tensor_parallel_size=2, ) tok = AutoTokenizer.from_pretrained("simplescaling/s1-32B") stop_token_ids = tok("<|im_end|>")["input_ids"] sampling_params = SamplingParams( max_tokens=32768, min_tokens=0, stop_token_ids=stop_token_ids, ) prompt = "How many r in raspberry" prompt = "<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n<|im_start|>user\n" + prompt + "<|im_end|>\n<|im_start|>assistant\n" o = model.generate(prompt, sampling_params=sampling_params) print(o[0].outputs[0].text) ``` #### vLLM with budget forcing ```python from vllm import LLM, SamplingParams from transformers import AutoTokenizer # Decide on a token limit for thinking; As the model's max tokens is 32768, 32000 usually ensures there is enough space for the model to still answer MAX_TOKENS_THINKING = 32000 # Decide how often to ignore end-of-thinking token NUM_IGNORE = 1 model = LLM( "simplescaling/s1-32B", # s1 originally gets this prompt wrong but with budget forcing it fixes it tensor_parallel_size=2, ) tok = AutoTokenizer.from_pretrained( "simplescaling/s1-32B" ) stop_token_ids = tok("<|im_end|>")["input_ids"] sampling_params = SamplingParams( max_tokens=32768, min_tokens=0, stop_token_ids=stop_token_ids, skip_special_tokens=False, temperature=0.0, ) # For the exact raspberry sample in the paper see prompts = [ "How many r in raspberry", ] for i, p in enumerate(prompts): prompt = "<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. 
You are a helpful assistant.<|im_end|>\n<|im_start|>user\n" + p + "<|im_end|>\n<|im_start|>assistant\n" stop_token_ids = tok("<|im_start|><|im_end|>")["input_ids"] sampling_params = SamplingParams( max_tokens=MAX_TOKENS_THINKING, min_tokens=0, stop_token_ids=stop_token_ids, skip_special_tokens=False, temperature=0.0, ) prompt += "<|im_start|>think" o = model.generate( prompt, sampling_params=sampling_params ) ignore_str = "Wait" max_tokens_thinking_tmp = MAX_TOKENS_THINKING if max_tokens_thinking_tmp > 0: for i in range(NUM_IGNORE): # Num of times to skip stop token max_tokens_thinking_tmp -= len(o[0].outputs[0].token_ids) prompt += o[0].outputs[0].text + ignore_str sampling_params = SamplingParams( max_tokens=max_tokens_thinking_tmp, min_tokens=1, stop_token_ids=stop_token_ids, skip_special_tokens=False, temperature=0.0, ) o = model.generate( prompt, sampling_params=sampling_params ) ### Final answer ### prompt += o[0].outputs[0].text # You can also append "Final Answer:" here like we do for some evaluations to prevent the model from just continuing to reason in its answer when early exiting stop_token_ids = tok("<|im_end|>")["input_ids"] sampling_params = SamplingParams( max_tokens=32768, min_tokens=0, stop_token_ids=stop_token_ids, skip_special_tokens=False, temperature=0.0, ) o = model.generate( prompt, sampling_params=sampling_params, ) print("With budget forcing:") # You will see that after the "Wait" in the reasoning trace it fixes its answer print(prompt + o[0].outputs[0].text) ``` #### transformers Install the `transformers` & `torch` libraries and run: ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch DEVICE = "cuda" if torch.cuda.is_available() else "cpu" model_name = "simplescaling/s1.1-32B" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "How many r in raspberry" messages = [ {"role": "system", "content": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` ### Training To run training, you can find our script at `train/sft.py` which you can invoke via one of the `train/sft*sh` scripts which in turn you can launch via `train/launch.sh` if you are on a SLURM cluster (requires editing the file for your cluster setup). To train s1-32B/s1.1-32B, we recommend 16 H100 GPUs i.e. 2 nodes with 8 each. For s1.1, we set the block size to 20000 to avoid OOM (https://github.com/simplescaling/s1/blob/0ad4b3de32507b4aa0d4be28f336276ee99b2315/train/sft.sh#L17); Check the wandb logs [here](https://wandb.ai/hashimoto-group/o1/runs/m1ilia77/overview). 
Quick start:
```
git clone https://github.com/simplescaling/s1.git
cd s1
pip3 install -r requirements.txt
bash train/sft.sh
```

*Note: If you encounter an out-of-memory (OOM) issue with 8 GPUs, consider enabling gradient checkpointing by adding the following line to your script: `--gradient_checkpointing=True`.*

### Evaluation

We cloned [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) at commit `4cec66e4e468d15789473d6d63c3a61a751fa524` and modified it. Setup:
```bash
cd eval/lm-evaluation-harness
pip install -e .[math,vllm]
```

All commands are in `eval/commands.sh`. For AIME24 we always pick the `aime24_nofigures` result, which uses a dataset that only contains the AIME24 figures if they are important for the task.

If you want to compute statistics (avg thinking tokens etc) for an evaluation run, you can use `python eval/compute_sample_stats.py path_to_samples_file.jsonl`.

All our evaluation result files are at: https://hf.co/datasets/simplescaling/results

To run REBASE: commands are in `eval/rebase/run.sh`

Note that for the evaluations in the Discussion section with REBASE we used https://huggingface.co/simplescaling/step-conditional-control-old trained on an older version of our dataset https://huggingface.co/datasets/simplescaling/s1K-step-conditional-control-old and run on an older version of our evaluation using https://huggingface.co/datasets/Maxwell-Jia/AIME_2024.

### Data

To recreate s1K follow the steps below. In various files you will have to replace the organizations `simplescaling` and `qfq` with an organization that you own. **Note that [s1K-1.1](https://huggingface.co/datasets/simplescaling/s1K-1.1) is a better dataset generated with r1 traces instead of Gemini traces.**

1. Run `data/collect_data.py` followed by `data/fix_gpqa.py` & `data/add_aime.py` to collect the questions; make sure to change the hub path in the respective files to one of your own.
2. Generate traces with Gemini via `python data/gemini.py`.
3. Generate answers with Qwen via `python data/bulk_inference.py`, which can be launched with `data/bulk_inference.sh`.
4. Add features by running `python data/featurization.py`.
5. Run the final filtering by going through `data/filter.ipynb`.

### Visuals

All figures and some tables are created via [this colab](https://colab.research.google.com/drive/1GAfwbJs2Y1dgGGsxrQyQg2G7CRH5NgN3?usp=sharing) equivalent to `visuals/visuals.ipynb`. Some are subsequently edited via the `visuals/s1.fig` file, which you can load in Figma.

### Known Issues

- vLLM throws `ValueError: Token id XXXXX is out of vocabulary`
  - This can happen with budget forcing, especially when running with temperature 1, where the model will sometimes do crazy stuff and predict a vocab id that is larger than its max token id but still within its embedding size, i.e. anything >151664 and <152064. When we refeed the model's previous outputs to it (which is done when setting e.g. max_thinking_tokens in the evaluation), this triggers the error because vLLM performs this check even though it would only be an issue for IDs >152064. To fix it, you can comment out the check that raises the ValueError in vLLM (the line `if max_input_id > tokenizer.max_token_id:` in `vllm/engine/llm_engine.py`).

### Citation

```bibtex
@misc{muennighoff2025s1simpletesttimescaling,
      title={s1: Simple test-time scaling},
      author={Niklas Muennighoff and Zitong Yang and Weijia Shi and Xiang Lisa Li and Li Fei-Fei and Hannaneh Hajishirzi and Luke Zettlemoyer and Percy Liang and Emmanuel Candès and Tatsunori Hashimoto},
      year={2025},
      eprint={2501.19393},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.19393},
}
```
{ "source": "simplescaling/s1", "title": "README.md", "url": "https://github.com/simplescaling/s1/blob/main/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 10902 }
MIT License Copyright (c) 2020 EleutherAI Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/LICENSE.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/LICENSE.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1066 }
# Language Model Evaluation Harness [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.10256836.svg)](https://doi.org/10.5281/zenodo.10256836) --- *Latest News 📣* - [2024/09] We are prototyping allowing users of LM Evaluation Harness to create and evaluate on text+image multimodal input, text output tasks, and have just added the `hf-multimodal` and `vllm-vlm` model types and `mmmu` task as a prototype feature. We welcome users to try out this in-progress feature and stress-test it for themselves, and suggest they check out [`lmms-eval`](https://github.com/EvolvingLMMs-Lab/lmms-eval), a wonderful project originally forking off of the lm-evaluation-harness, for a broader range of multimodal tasks, models, and features. - [2024/07] [API model](docs/API_guide.md) support has been updated and refactored, introducing support for batched and async requests, and making it significantly easier to customize and use for your own purposes. **To run Llama 405B, we recommend using VLLM's OpenAI-compliant API to host the model, and use the `local-completions` model type to evaluate the model.** - [2024/07] New Open LLM Leaderboard tasks have been added ! You can find them under the [leaderboard](lm_eval/tasks/leaderboard/README.md) task group. --- ## Announcement **A new v0.4.0 release of lm-evaluation-harness is available** ! New updates and features include: - **New Open LLM Leaderboard tasks have been added ! You can find them under the [leaderboard](lm_eval/tasks/leaderboard/README.md) task group.** - Internal refactoring - Config-based task creation and configuration - Easier import and sharing of externally-defined task config YAMLs - Support for Jinja2 prompt design, easy modification of prompts + prompt imports from Promptsource - More advanced configuration options, including output post-processing, answer extraction, and multiple LM generations per document, configurable fewshot settings, and more - Speedups and new modeling libraries supported, including: faster data-parallel HF model usage, vLLM support, MPS support with HuggingFace, and more - Logging and usability changes - New tasks including CoT BIG-Bench-Hard, Belebele, user-defined task groupings, and more Please see our updated documentation pages in `docs/` for more details. Development will be continuing on the `main` branch, and we encourage you to give us feedback on what features are desired and how to improve the library further, or ask questions, either in issues or PRs on GitHub, or in the [EleutherAI discord](https://discord.gg/eleutherai)! --- ## Overview This project provides a unified framework to test generative language models on a large number of different evaluation tasks. **Features:** - Over 60 standard academic benchmarks for LLMs, with hundreds of subtasks and variants implemented. - Support for models loaded via [transformers](https://github.com/huggingface/transformers/) (including quantization via [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ)), [GPT-NeoX](https://github.com/EleutherAI/gpt-neox), and [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed/), with a flexible tokenization-agnostic interface. - Support for fast and memory-efficient inference with [vLLM](https://github.com/vllm-project/vllm). - Support for commercial APIs including [OpenAI](https://openai.com), and [TextSynth](https://textsynth.com/). - Support for evaluation on adapters (e.g. LoRA) supported in [HuggingFace's PEFT library](https://github.com/huggingface/peft). - Support for local models and benchmarks. 
- Evaluation with publicly available prompts ensures reproducibility and comparability between papers. - Easy support for custom prompts and evaluation metrics. The Language Model Evaluation Harness is the backend for 🤗 Hugging Face's popular [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), has been used in [hundreds of papers](https://scholar.google.com/scholar?oi=bibs&hl=en&authuser=2&cites=15052937328817631261,4097184744846514103,1520777361382155671,17476825572045927382,18443729326628441434,14801318227356878622,7890865700763267262,12854182577605049984,15641002901115500560,5104500764547628290), and is used internally by dozens of organizations including NVIDIA, Cohere, BigScience, BigCode, Nous Research, and Mosaic ML. ## Install To install the `lm-eval` package from the github repository, run: ```bash git clone --depth 1 https://github.com/EleutherAI/lm-evaluation-harness cd lm-evaluation-harness pip install -e . ``` We also provide a number of optional dependencies for extended functionality. A detailed table is available at the end of this document. ## Basic Usage ### User Guide A user guide detailing the full list of supported arguments is provided [here](./docs/interface.md), and on the terminal by calling `lm_eval -h`. Alternatively, you can use `lm-eval` instead of `lm_eval`. A list of supported tasks (or groupings of tasks) can be viewed with `lm-eval --tasks list`. Task descriptions and links to corresponding subfolders are provided [here](./lm_eval/tasks/README.md). ### Hugging Face `transformers` To evaluate a model hosted on the [HuggingFace Hub](https://huggingface.co/models) (e.g. GPT-J-6B) on `hellaswag` you can use the following command (this assumes you are using a CUDA-compatible GPU): ```bash lm_eval --model hf \ --model_args pretrained=EleutherAI/gpt-j-6B \ --tasks hellaswag \ --device cuda:0 \ --batch_size 8 ``` Additional arguments can be provided to the model constructor using the `--model_args` flag. Most notably, this supports the common practice of using the `revisions` feature on the Hub to store partially trained checkpoints, or to specify the datatype for running a model: ```bash lm_eval --model hf \ --model_args pretrained=EleutherAI/pythia-160m,revision=step100000,dtype="float" \ --tasks lambada_openai,hellaswag \ --device cuda:0 \ --batch_size 8 ``` Models that are loaded via both `transformers.AutoModelForCausalLM` (autoregressive, decoder-only GPT style models) and `transformers.AutoModelForSeq2SeqLM` (such as encoder-decoder models like T5) in Huggingface are supported. Batch size selection can be automated by setting the ```--batch_size``` flag to ```auto```. This will perform automatic detection of the largest batch size that will fit on your device. On tasks where there is a large difference between the longest and shortest example, it can be helpful to periodically recompute the largest batch size, to gain a further speedup. To do this, append ```:N``` to above flag to automatically recompute the largest batch size ```N``` times. 
For example, to recompute the batch size 4 times, the command would be: ```bash lm_eval --model hf \ --model_args pretrained=EleutherAI/pythia-160m,revision=step100000,dtype="float" \ --tasks lambada_openai,hellaswag \ --device cuda:0 \ --batch_size auto:4 ``` > [!Note] > Just like you can provide a local path to `transformers.AutoModel`, you can also provide a local path to `lm_eval` via `--model_args pretrained=/path/to/model` #### Multi-GPU Evaluation with Hugging Face `accelerate` We support three main ways of using Hugging Face's [accelerate 🚀](https://github.com/huggingface/accelerate) library for multi-GPU evaluation. To perform *data-parallel evaluation* (where each GPU loads a **separate full copy** of the model), we leverage the `accelerate` launcher as follows: ``` accelerate launch -m lm_eval --model hf \ --tasks lambada_openai,arc_easy \ --batch_size 16 ``` (or via `accelerate launch --no-python lm_eval`). For cases where your model can fit on a single GPU, this allows you to evaluate on K GPUs K times faster than on one. **WARNING**: This setup does not work with FSDP model sharding, so in `accelerate config` FSDP must be disabled, or the NO_SHARD FSDP option must be used. The second way of using `accelerate` for multi-GPU evaluation is when your model is *too large to fit on a single GPU.* In this setting, run the library *outside the `accelerate` launcher*, but passing `parallelize=True` to `--model_args` as follows: ``` lm_eval --model hf \ --tasks lambada_openai,arc_easy \ --model_args parallelize=True \ --batch_size 16 ``` This means that your model's weights will be split across all available GPUs. For more advanced users or even larger models, we allow for the following arguments when `parallelize=True` as well: - `device_map_option`: How to split model weights across available GPUs. defaults to "auto". - `max_memory_per_gpu`: the max GPU memory to use per GPU in loading the model. - `max_cpu_memory`: the max amount of CPU memory to use when offloading the model weights to RAM. - `offload_folder`: a folder where model weights will be offloaded to disk if needed. The third option is to use both at the same time. This will allow you to take advantage of both data parallelism and model sharding, and is especially useful for models that are too large to fit on a single GPU. ``` accelerate launch --multi_gpu --num_processes {nb_of_copies_of_your_model} \ -m lm_eval --model hf \ --tasks lambada_openai,arc_easy \ --model_args parallelize=True \ --batch_size 16 ``` To learn more about model parallelism and how to use it with the `accelerate` library, see the [accelerate documentation](https://huggingface.co/docs/transformers/v4.15.0/en/parallelism) **Warning: We do not natively support multi-node evaluation using the `hf` model type! Please reference [our GPT-NeoX library integration](https://github.com/EleutherAI/gpt-neox/blob/main/eval.py) for an example of code in which a custom multi-machine evaluation script is written.** **Note: we do not currently support multi-node evaluations natively, and advise using either an externally hosted server to run inference requests against, or creating a custom integration with your distributed framework [as is done for the GPT-NeoX library](https://github.com/EleutherAI/gpt-neox/blob/main/eval_tasks/eval_adapter.py).** ### NVIDIA `nemo` models [NVIDIA NeMo Framework](https://github.com/NVIDIA/NeMo) is a generative AI framework built for researchers and pytorch developers working on language models. 
To evaluate a `nemo` model, start by installing NeMo following [the documentation](https://github.com/NVIDIA/NeMo?tab=readme-ov-file#installation). We highly recommend using the NVIDIA PyTorch or NeMo container, especially if you have issues installing Apex or any other dependencies (see [latest released containers](https://github.com/NVIDIA/NeMo/releases)). Please also install the lm evaluation harness library following the instructions in [the Install section](https://github.com/EleutherAI/lm-evaluation-harness/tree/main?tab=readme-ov-file#install).

NeMo models can be obtained through the [NVIDIA NGC Catalog](https://catalog.ngc.nvidia.com/models) or on [NVIDIA's Hugging Face page](https://huggingface.co/nvidia). The [NVIDIA NeMo Framework](https://github.com/NVIDIA/NeMo/tree/main/scripts/nlp_language_modeling) provides conversion scripts to convert the `hf` checkpoints of popular models like llama, falcon, mixtral or mpt to `nemo`.

Run a `nemo` model on one GPU:
```bash
lm_eval --model nemo_lm \
    --model_args path=<path_to_nemo_model> \
    --tasks hellaswag \
    --batch_size 32
```

It is recommended to unpack the `nemo` model before use, so that unpacking does not happen inside the Docker container, where it may overflow disk space. To do so, run:

```bash
mkdir MY_MODEL
tar -xvf MY_MODEL.nemo -C MY_MODEL
```

#### Multi-GPU evaluation with NVIDIA `nemo` models

By default, only one GPU is used, but we also support data replication and tensor/pipeline parallelism during evaluation on a single node.

1) To enable data replication, set the `devices` entry of `model_args` to the number of data replicas to run. For example, the command to run 8 data replicas over 8 GPUs is:
```bash
torchrun --nproc-per-node=8 --no-python lm_eval \
    --model nemo_lm \
    --model_args path=<path_to_nemo_model>,devices=8 \
    --tasks hellaswag \
    --batch_size 32
```

2) To enable tensor and/or pipeline parallelism, set the `tensor_model_parallel_size` and/or `pipeline_model_parallel_size` entries of `model_args`. In addition, set `devices` to the product of `tensor_model_parallel_size` and `pipeline_model_parallel_size`. For example, the command to use one node of 4 GPUs with tensor parallelism of 2 and pipeline parallelism of 2 is:
```bash
torchrun --nproc-per-node=4 --no-python lm_eval \
    --model nemo_lm \
    --model_args path=<path_to_nemo_model>,devices=4,tensor_model_parallel_size=2,pipeline_model_parallel_size=2 \
    --tasks hellaswag \
    --batch_size 32
```
Note that it is recommended to replace the `python` command with `torchrun --nproc-per-node=<number of devices> --no-python` to facilitate loading the model into the GPUs. This is especially important for large checkpoints loaded into multiple GPUs.

Not supported yet: multi-node evaluation and combinations of data replication with tensor or pipeline parallelism.

### Tensor + Data Parallel and Optimized Inference with `vLLM`

We also support vLLM for faster inference on [supported model types](https://docs.vllm.ai/en/latest/models/supported_models.html), which is especially faster when splitting a model across multiple GPUs. For single-GPU or multi-GPU inference (tensor parallel, data parallel, or a combination of both), for example:

```bash
lm_eval --model vllm \
    --model_args pretrained={model_name},tensor_parallel_size={GPUs_per_model},dtype=auto,gpu_memory_utilization=0.8,data_parallel_size={model_replicas} \
    --tasks lambada_openai \
    --batch_size auto
```
To use vllm, do `pip install lm_eval[vllm]`.
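If you prefer to drive the same evaluation from Python instead of the CLI, a sketch along the following lines should work; it assumes the `lm_eval.simple_evaluate` API accepts the same `model`/`model_args` strings as the command line, and the pretrained model and task here are only examples.

```python
import lm_eval

# Mirrors the CLI invocation above; the pretrained model, parallel sizes, and task are examples.
results = lm_eval.simple_evaluate(
    model="vllm",
    model_args="pretrained=EleutherAI/pythia-160m,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.8",
    tasks=["lambada_openai"],
    batch_size="auto",
)
print(results["results"]["lambada_openai"])
```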
For a full list of supported vLLM configurations, please reference our [vLLM integration](https://github.com/EleutherAI/lm-evaluation-harness/blob/e74ec966556253fbe3d8ecba9de675c77c075bce/lm_eval/models/vllm_causallms.py) and the vLLM documentation. vLLM occasionally differs in output from Huggingface. We treat Huggingface as the reference implementation, and provide a [script](./scripts/model_comparator.py) for checking the validity of vllm results against HF. > [!Tip] > For fastest performance, we recommend using `--batch_size auto` for vLLM whenever possible, to leverage its continuous batching functionality! > [!Tip] > Passing `max_model_len=4096` or some other reasonable default to vLLM through model args may cause speedups or prevent out-of-memory errors when trying to use auto batch size, such as for Mistral-7B-v0.1 which defaults to a maximum length of 32k. ### Model APIs and Inference Servers Our library also supports the evaluation of models served via several commercial APIs, and we hope to implement support for the most commonly used performant local/self-hosted inference servers. To call a hosted model, use: ```bash export OPENAI_API_KEY=YOUR_KEY_HERE lm_eval --model openai-completions \ --model_args model=davinci \ --tasks lambada_openai,hellaswag ``` We also support using your own local inference server with servers that mirror the OpenAI Completions and ChatCompletions APIs. ```bash lm_eval --model local-completions --tasks gsm8k --model_args model=facebook/opt-125m,base_url=http://{yourip}:8000/v1/completions,num_concurrent=1,max_retries=3,tokenized_requests=False,batch_size=16 ``` Note that for externally hosted models, configs such as `--device` which relate to where to place a local model should not be used and do not function. Just like you can use `--model_args` to pass arbitrary arguments to the model constructor for local models, you can use it to pass arbitrary arguments to the model API for hosted models. See the documentation of the hosting service for information on what arguments they support. | API or Inference Server | Implemented? 
| `--model <xxx>` name | Models supported: | Request Types: | |---------------------------------------------------------------------------------------------------------------------------|---------------------------------|-----------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------| | OpenAI Completions | :heavy_check_mark: | `openai-completions`, `local-completions` | All OpenAI Completions API models | `generate_until`, `loglikelihood`, `loglikelihood_rolling` | | OpenAI ChatCompletions | :heavy_check_mark: | `openai-chat-completions`, `local-chat-completions` | [All ChatCompletions API models](https://platform.openai.com/docs/guides/gpt) | `generate_until` (no logprobs) | | Anthropic | :heavy_check_mark: | `anthropic` | [Supported Anthropic Engines](https://docs.anthropic.com/claude/reference/selecting-a-model) | `generate_until` (no logprobs) | | Anthropic Chat | :heavy_check_mark: | `anthropic-chat`, `anthropic-chat-completions` | [Supported Anthropic Engines](https://docs.anthropic.com/claude/docs/models-overview) | `generate_until` (no logprobs) | | Textsynth | :heavy_check_mark: | `textsynth` | [All supported engines](https://textsynth.com/documentation.html#engines) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` | | Cohere | [:hourglass: - blocked on Cohere API bug](https://github.com/EleutherAI/lm-evaluation-harness/pull/395) | N/A | [All `cohere.generate()` engines](https://docs.cohere.com/docs/models) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` | | [Llama.cpp](https://github.com/ggerganov/llama.cpp) (via [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)) | :heavy_check_mark: | `gguf`, `ggml` | [All models supported by llama.cpp](https://github.com/ggerganov/llama.cpp) | `generate_until`, `loglikelihood`, (perplexity evaluation not yet implemented) | | vLLM | :heavy_check_mark: | `vllm` | [Most HF Causal Language Models](https://docs.vllm.ai/en/latest/models/supported_models.html) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` | | Mamba | :heavy_check_mark: | `mamba_ssm` | [Mamba architecture Language Models via the `mamba_ssm` package](https://huggingface.co/state-spaces) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` | | Huggingface Optimum (Causal LMs) | ✔️ | `openvino` | Any decoder-only AutoModelForCausalLM converted with Huggingface Optimum into OpenVINO™ Intermediate Representation (IR) format | `generate_until`, `loglikelihood`, `loglikelihood_rolling` | ... | | Neuron via AWS Inf2 (Causal LMs) | ✔️ | `neuronx` | Any decoder-only AutoModelForCausalLM supported to run on [huggingface-ami image for inferentia2](https://aws.amazon.com/marketplace/pp/prodview-gr3e6yiscria2) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` | ... | | [Neural Magic DeepSparse](https://github.com/neuralmagic/deepsparse) | ✔️ | `deepsparse` | Any LM from [SparseZoo](https://sparsezoo.neuralmagic.com/) or on [HF Hub with the "deepsparse" tag](https://huggingface.co/models?other=deepsparse) | `generate_until`, `loglikelihood` | ... 
| | [Neural Magic SparseML](https://github.com/neuralmagic/sparseml) | ✔️ | `sparseml` | Any decoder-only AutoModelForCausalLM from [SparseZoo](https://sparsezoo.neuralmagic.com/) or on [HF Hub](https://huggingface.co/neuralmagic). Especially useful for models with quantization like [`zoo:llama2-7b-gsm8k_llama2_pretrain-pruned60_quantized`](https://sparsezoo.neuralmagic.com/models/llama2-7b-gsm8k_llama2_pretrain-pruned60_quantized) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` | ... | | Your local inference server! | :heavy_check_mark: | `local-completions` or `local-chat-completions` | Support for OpenAI API-compatible servers, with easy customization for other APIs. | `generate_until`, `loglikelihood`, `loglikelihood_rolling` | | ... | Models which do not supply logits or logprobs can be used with tasks of type `generate_until` only, while local models, or APIs that supply logprobs/logits of their prompts, can be run on all task types: `generate_until`, `loglikelihood`, `loglikelihood_rolling`, and `multiple_choice`. For more information on the different task `output_types` and model request types, see [our documentation](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/model_guide.md#interface). > [!Note] > For best performance with closed chat model APIs such as Anthropic Claude 3 and GPT-4, we recommend carefully looking at a few sample outputs using `--limit 10` first to confirm answer extraction and scoring on generative tasks is performing as expected. providing `system="<some system prompt here>"` within `--model_args` for anthropic-chat-completions, to instruct the model what format to respond in, may be useful. ### Other Frameworks A number of other libraries contain scripts for calling the eval harness through their library. These include [GPT-NeoX](https://github.com/EleutherAI/gpt-neox/blob/main/eval_tasks/eval_adapter.py), [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed/blob/main/examples/MoE/readme_evalharness.md), and [mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/blob/master/eval_harness.py). To create your own custom integration you can follow instructions from [this tutorial](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/interface.md#external-library-usage). ### Additional Features > [!Note] > For tasks unsuitable for direct evaluation — either due risks associated with executing untrusted code or complexities in the evaluation process — the `--predict_only` flag is available to obtain decoded generations for post-hoc evaluation. If you have a Metal compatible Mac, you can run the eval harness using the MPS back-end by replacing `--device cuda:0` with `--device mps` (requires PyTorch version 2.1 or higher). **Note that the PyTorch MPS backend is still in early stages of development, so correctness issues or unsupported operations may exist. If you observe oddities in model performance on the MPS back-end, we recommend first checking that a forward pass of your model on `--device cpu` and `--device mps` match.** > [!Note] > You can inspect what the LM inputs look like by running the following command: > ```bash > python write_out.py \ > --tasks <task1,task2,...> \ > --num_fewshot 5 \ > --num_examples 10 \ > --output_base_path /path/to/output/folder > ``` > This will write out one text file for each task. 
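Building on the `--predict_only` flag described above, the following sketch shows one way to collect raw generations for later post-hoc scoring. The model, task, and output path are placeholders; `--predict_only` should be combined with `--log_samples` and an `--output_path` so the decoded outputs are actually written to disk:

```bash
# Generate model outputs without computing metrics; score them offline later.
lm_eval --model hf \
    --model_args pretrained=EleutherAI/gpt-j-6B \
    --tasks gsm8k \
    --device cuda:0 \
    --batch_size 8 \
    --predict_only \
    --log_samples \
    --output_path output/gpt-j-6B-predictions
```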
To verify the data integrity of the tasks you're performing in addition to running the tasks themselves, you can use the `--check_integrity` flag: ```bash lm_eval --model openai \ --model_args engine=davinci \ --tasks lambada_openai,hellaswag \ --check_integrity ``` ## Advanced Usage Tips For models loaded with the HuggingFace `transformers` library, any arguments provided via `--model_args` get passed to the relevant constructor directly. This means that anything you can do with `AutoModel` can be done with our library. For example, you can pass a local path via `pretrained=` or use models finetuned with [PEFT](https://github.com/huggingface/peft) by taking the call you would run to evaluate the base model and add `,peft=PATH` to the `model_args` argument: ```bash lm_eval --model hf \ --model_args pretrained=EleutherAI/gpt-j-6b,parallelize=True,load_in_4bit=True,peft=nomic-ai/gpt4all-j-lora \ --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq \ --device cuda:0 ``` Models provided as delta weights can be easily loaded using the Hugging Face transformers library. Within --model_args, set the delta argument to specify the delta weights, and use the pretrained argument to designate the relative base model to which they will be applied: ```bash lm_eval --model hf \ --model_args pretrained=Ejafa/llama_7B,delta=lmsys/vicuna-7b-delta-v1.1 \ --tasks hellaswag ``` [GPTQ](https://github.com/PanQiWei/AutoGPTQ) quantized models can be loaded by specifying their file names in `,autogptq=NAME` (or `,autogptq=True` for default names) in the `model_args` argument: ```bash lm_eval --model hf \ --model_args pretrained=model-name-or-path,autogptq=model.safetensors,gptq_use_triton=True \ --tasks hellaswag ``` We support wildcards in task names, for example you can run all of the machine-translated lambada tasks via `--task lambada_openai_mt_*`. ## Saving Results To save evaluation results provide an `--output_path`. We also support logging model responses with the `--log_samples` flag for post-hoc analysis. Additionally, one can provide a directory with `--use_cache` to cache the results of prior runs. This allows you to avoid repeated execution of the same (model, task) pairs for re-scoring. To push results and samples to the Hugging Face Hub, first ensure an access token with write access is set in the `HF_TOKEN` environment variable. Then, use the `--hf_hub_log_args` flag to specify the organization, repository name, repository visibility, and whether to push results and samples to the Hub - [example dataset on the HF Hub](https://huggingface.co/datasets/KonradSzafer/lm-eval-results-demo). For instance: ```bash lm_eval --model hf \ --model_args pretrained=model-name-or-path,autogptq=model.safetensors,gptq_use_triton=True \ --tasks hellaswag \ --log_samples \ --output_path results \ --hf_hub_log_args hub_results_org=EleutherAI,hub_repo_name=lm-eval-results,push_results_to_hub=True,push_samples_to_hub=True,public_repo=False \ ``` This allows you to easily download the results and samples from the Hub, using: ```python from datasets import load_dataset load_dataset("EleutherAI/lm-eval-results-private", "hellaswag", "latest") ``` For a full list of supported arguments, check out the [interface](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/interface.md) guide in our documentation! ## Visualizing Results You can seamlessly visualize and analyze the results of your evaluation harness runs using both Weights & Biases (W&B) and Zeno. 
### Zeno

You can use [Zeno](https://zenoml.com) to visualize the results of your eval harness runs.

First, head to [hub.zenoml.com](https://hub.zenoml.com) to create an account and get an API key [on your account page](https://hub.zenoml.com/account). Add this key as an environment variable:

```bash
export ZENO_API_KEY=[your api key]
```

You'll also need to install the `lm_eval[zeno]` package extra.

To visualize the results, run the eval harness with the `log_samples` and `output_path` flags. We expect `output_path` to contain multiple folders that represent individual model names. You can thus run your evaluation on any number of tasks and models and upload all of the results as projects on Zeno.

```bash
lm_eval \
    --model hf \
    --model_args pretrained=EleutherAI/gpt-j-6B \
    --tasks hellaswag \
    --device cuda:0 \
    --batch_size 8 \
    --log_samples \
    --output_path output/gpt-j-6B
```

Then, you can upload the resulting data using the `zeno_visualize` script:

```bash
python scripts/zeno_visualize.py \
    --data_path output \
    --project_name "Eleuther Project"
```

This will use all subfolders in `data_path` as different models and upload all tasks within these model folders to Zeno. If you run the eval harness on multiple tasks, the `project_name` will be used as a prefix and one project will be created per task.

You can find an example of this workflow in [examples/visualize-zeno.ipynb](examples/visualize-zeno.ipynb).

### Weights and Biases

With the [Weights and Biases](https://wandb.ai/site) integration, you can now spend more time extracting deeper insights into your evaluation results. The integration is designed to streamline the process of logging and visualizing experiment results using the Weights & Biases (W&B) platform.

The integration provides the following functionality:

- Automatically log the evaluation results,
- Log the samples as W&B Tables for easy visualization,
- Log the `results.json` file as an artifact for version control,
- Log the `<task_name>_eval_samples.json` file if the samples are logged,
- Generate a comprehensive report for analysis and visualization with all the important metrics,
- Log task- and CLI-specific configs,
- And more out of the box, such as the command used to run the evaluation, GPU/CPU counts, timestamp, etc.

First, install the `lm_eval[wandb]` package extra with `pip install lm_eval[wandb]`. Then authenticate your machine with your unique W&B token: visit https://wandb.ai/authorize to get one, and run `wandb login` in your command-line terminal.

Run the eval harness as usual, adding the `wandb_args` flag. Use this flag to provide arguments for initializing a wandb run ([wandb.init](https://docs.wandb.ai/ref/python/init)) as comma-separated string arguments.

```bash
lm_eval \
    --model hf \
    --model_args pretrained=microsoft/phi-2,trust_remote_code=True \
    --tasks hellaswag,mmlu_abstract_algebra \
    --device cuda:0 \
    --batch_size 8 \
    --output_path output/phi-2 \
    --limit 10 \
    --wandb_args project=lm-eval-harness-integration \
    --log_samples
```

In the stdout, you will find a link to the W&B run page as well as a link to the generated report. You can find an example of this workflow in [examples/visualize-wandb.ipynb](examples/visualize-wandb.ipynb), which also shows how to use the integration beyond the CLI.

## How to Contribute or Learn More?

For more information on the library and how everything fits together, check out all of our [documentation pages](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/docs)!
We plan to post a larger roadmap of desired + planned library improvements soon, with more information on how contributors can help. ### Implementing new tasks To implement a new task in the eval harness, see [this guide](./docs/new_task_guide.md). In general, we follow this priority list for addressing concerns about prompting and other eval details: 1. If there is widespread agreement among people who train LLMs, use the agreed upon procedure. 2. If there is a clear and unambiguous official implementation, use that procedure. 3. If there is widespread agreement among people who evaluate LLMs, use the agreed upon procedure. 4. If there are multiple common implementations but not universal or widespread agreement, use our preferred option among the common implementations. As before, prioritize choosing from among the implementations found in LLM training papers. These are guidelines and not rules, and can be overruled in special circumstances. We try to prioritize agreement with the procedures used by other groups to decrease the harm when people inevitably compare runs across different papers despite our discouragement of the practice. Historically, we also prioritized the implementation from [Language Models are Few Shot Learners](https://arxiv.org/abs/2005.14165) as our original goal was specifically to compare results with that paper. ### Support The best way to get support is to open an issue on this repo or join the [EleutherAI Discord server](https://discord.gg/eleutherai). The `#lm-thunderdome` channel is dedicated to developing this project and the `#release-discussion` channel is for receiving support for our releases. If you've used the library and have had a positive (or negative) experience, we'd love to hear from you! ## Optional Extras Extras dependencies can be installed via `pip install -e ".[NAME]"` | Name | Use | |-----------------|----------------------------------------------| | api | For using api models (Anthropic, OpenAI API) | | deepsparse | For running NM's DeepSparse models | | dev | For linting PRs and contributions | | gptq | For loading models with GPTQ | | hf_transfer | For speeding up HF Hub file downloads | | ifeval | For running the IFEval task | | neuronx | For running on AWS inf2 instances | | mamba | For loading Mamba SSM models | | math | For running math task answer checking | | multilingual | For multilingual tokenizers | | optimum | For running Intel OpenVINO models | | promptsource | For using PromptSource prompts | | sentencepiece | For using the sentencepiece tokenizer | | sparseml | For using NM's SparseML models | | testing | For running library test suite | | vllm | For loading models with vLLM | | zeno | For visualizing results with Zeno | | --------------- | --------------------------------------- | | all | Loads all extras (not recommended) | ## Cite as ``` @misc{eval-harness, author = {Gao, Leo and Tow, Jonathan and Abbasi, Baber and Biderman, Stella and Black, Sid and DiPofi, Anthony and Foster, Charles and Golding, Laurence and Hsu, Jeffrey and Le Noac'h, Alain and Li, Haonan and McDonell, Kyle and Muennighoff, Niklas and Ociepa, Chris and Phang, Jason and Reynolds, Laria and Schoelkopf, Hailey and Skowron, Aviya and Sutawika, Lintang and Tang, Eric and Thite, Anish and Wang, Ben and Wang, Kevin and Zou, Andy}, title = {A framework for few-shot language model evaluation}, month = 07, year = 2024, publisher = {Zenodo}, version = {v0.4.3}, doi = {10.5281/zenodo.12608602}, url = {https://zenodo.org/records/12608602} } ```
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 39773 }
# TemplateAPI Usage Guide

The `TemplateAPI` class is a versatile superclass designed to facilitate the integration of various API-based language models into the lm-evaluation-harness framework. This guide will explain how to use and extend the `TemplateAPI` class to implement your own API models.

If your API implements the OpenAI API, you can use the `local-completions` or the `local-chat-completions` model types (defined [here](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/models/openai_completions.py)), which can also serve as examples of how to effectively subclass this template.

## Overview

The `TemplateAPI` class provides a template for creating API-based model implementations. It handles common functionalities such as:

- Tokenization (optional)
- Batch processing
- Caching
- Retrying failed requests
- Parsing API responses

To use this class, you typically need to subclass it and implement specific methods for your API.

## Key Methods to Implement

When subclassing `TemplateAPI`, you need to implement the following methods:

1. `_create_payload`: Creates the JSON payload for API requests.
2. `parse_logprobs`: Parses log probabilities from API responses.
3. `parse_generations`: Parses generated text from API responses.
4. `headers`: Returns the headers for the API request.

You may also need to override other methods or properties depending on your API's specific requirements.

> [!NOTE]
> Currently, loglikelihood and MCQ-based tasks (such as MMLU) are only supported for completion endpoints, not for chat-completion endpoints (those that expect a list of dicts)! Completion APIs which support instruct-tuned models can be evaluated with the `--apply_chat_template` option in order to simultaneously evaluate models using a chat template format while still being able to access the model logits needed for loglikelihood-based tasks.

## TemplateAPI Arguments

When initializing a `TemplateAPI` instance or a subclass, you can provide several arguments to customize its behavior. Here's a detailed explanation of some important arguments:

- `model` or `pretrained` (str):
  - The name or identifier of the model to use.
  - `model` takes precedence over `pretrained` when both are provided.
- `base_url` (str):
  - The base URL for the API endpoint.
- `tokenizer` (str, optional):
  - The name or path of the tokenizer to use.
  - If not provided, it defaults to using the same tokenizer name as the model.
- `num_concurrent` (int):
  - Number of concurrent requests to make to the API.
  - Useful for APIs that support parallel processing.
  - Default is 1 (sequential processing).
- `tokenized_requests` (bool):
  - Determines whether the input is pre-tokenized. Defaults to `True`.
  - Requests can be sent in either tokenized form (`list[list[int]]`) or as text (`list[str]`, or `str` for batch_size=1).
  - For loglikelihood-based tasks, prompts require tokenization to calculate the context length. If `False`, prompts are decoded back to text before being sent to the API.
  - Not as important for `generate_until` tasks.
  - Ignored for chat-formatted inputs (`list[dict...]`) or if `tokenizer_backend` is None.
- `tokenizer_backend` (str, optional):
  - Required for loglikelihood-based or MCQ tasks.
  - Specifies the tokenizer library to use. Options are "tiktoken", "huggingface", or None.
  - Default is "huggingface".
- `max_length` (int, optional):
  - Maximum length of input + output.
  - Default is 2048.
- `max_retries` (int, optional):
  - Maximum number of retries for failed API requests.
- Default is 3. - `max_gen_toks` (int, optional): - Maximum number of tokens to generate in completion tasks. - Default is 256 or set in task yaml. - `batch_size` (int or str, optional): - Number of requests to batch together (if the API supports batching). - Can be an integer or "auto" (which defaults to 1 for API models). - Default is 1. - `seed` (int, optional): - Random seed for reproducibility. - Default is 1234. - `add_bos_token` (bool, optional): - Whether to add the beginning-of-sequence token to inputs (when tokenizing). - Default is False. - `custom_prefix_token_id` (int, optional): - Custom token ID to use as a prefix for inputs. - If not provided, uses the model's default BOS or EOS token (if `add_bos_token` is True). Example usage: ```python class MyAPIModel(TemplateAPI): def __init__(self, **kwargs): super().__init__( model="my-model", base_url="https://api.mymodel.com/v1/completions", tokenizer_backend="huggingface", num_concurrent=5, max_retries=5, batch_size=10, **kwargs ) # Implement other required methods... ``` When subclassing `TemplateAPI`, you can override these arguments in your `__init__` method to set default values specific to your API. You can also add additional (potentially user-specified) arguments as needed for your specific implementation. ## Example Implementation: OpenAI API The `OpenAICompletionsAPI` and `OpenAIChatCompletion` ([here](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/models/openai_completions.py) classes demonstrate how to implement API models using the `TemplateAPI` class. Here's a breakdown of the key components: ### 1. Subclassing and Initialization ```python @register_model("openai-completions") class OpenAICompletionsAPI(LocalCompletionsAPI): def __init__( self, base_url="https://api.openai.com/v1/completions", tokenizer_backend="tiktoken", **kwargs, ): super().__init__( base_url=base_url, tokenizer_backend=tokenizer_backend, **kwargs ) ``` ### 2. Implementing API Key Retrieval ```python @cached_property def api_key(self): key = os.environ.get("OPENAI_API_KEY", None) if key is None: raise ValueError( "API key not found. Please set the OPENAI_API_KEY environment variable." ) return key ``` ### 3. Creating the Payload ```python def _create_payload( self, messages: Union[List[List[int]], List[dict], List[str], str], generate=False, gen_kwargs: Optional[dict] = None, **kwargs, ) -> dict: if generate: # ... (implementation for generation) else: # ... (implementation for log likelihood) ``` ### 4. Parsing API Responses ```python @staticmethod def parse_logprobs( outputs: Union[Dict, List[Dict]], tokens: List[List[int]] = None, ctxlens: List[int] = None, **kwargs, ) -> List[Tuple[float, bool]]: # ... (implementation) @staticmethod def parse_generations(outputs: Union[Dict, List[Dict]], **kwargs) -> List[str]: # ... (implementation) ``` The requests are initiated in the `model_call` or the `amodel_call` methods. ## Implementing Your Own API Model To implement your own API model: 1. Subclass `TemplateAPI` or one of its subclasses (e.g., `LocalCompletionsAPI`). 2. Override the `__init__` method if you need to set specific parameters. 3. Implement the `_create_payload` and `header` methods to create the appropriate payload for your API. 4. Implement the `parse_logprobs` and `parse_generations` methods to parse your API's responses. 5. Override the `api_key` property if your API requires authentication. 6. Override any other methods as necessary to match your API's behavior. ## Best Practices 1. 
Use the `@register_model` decorator to register your model with the framework (and import it in `lm_eval/models/__init__.py`!).
2. Use environment variables for sensitive information like API keys.
3. Properly handle batching and concurrent requests if supported by your API.
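To tie these steps together, here is a heavily hedged skeleton of what a custom subclass might look like. The endpoint URL, model name, and import path of `TemplateAPI` are assumptions made for illustration (check `lm_eval/models` in your installed version for the actual location), and the payload/response shapes assume an OpenAI-completions-style API:

```python
from typing import Dict, List, Optional, Tuple, Union

from lm_eval.api.registry import register_model
from lm_eval.models.api_models import TemplateAPI  # import path is an assumption


@register_model("my-api")  # hypothetical model name
class MyAPIModel(TemplateAPI):
    def __init__(
        self,
        base_url: str = "https://api.example.com/v1/completions",  # placeholder endpoint
        **kwargs,
    ):
        super().__init__(base_url=base_url, tokenizer_backend="huggingface", **kwargs)

    def _create_payload(
        self,
        messages: Union[List[List[int]], List[dict], List[str], str],
        generate: bool = False,
        gen_kwargs: Optional[dict] = None,
        **kwargs,
    ) -> dict:
        if generate:
            gen_kwargs = gen_kwargs or {}
            return {
                "prompt": messages,
                "max_tokens": gen_kwargs.get("max_gen_toks", 256),
                "temperature": gen_kwargs.get("temperature", 0.0),
                "stop": gen_kwargs.get("until", None),
            }
        # Loglikelihood requests: ask the API to echo prompt logprobs (OpenAI-style).
        return {"prompt": messages, "max_tokens": 0, "echo": True, "logprobs": 1}

    @staticmethod
    def parse_generations(outputs: Union[Dict, List[Dict]], **kwargs) -> List[str]:
        # Assumes an OpenAI-completions-style response; adapt to your API's schema.
        outputs = outputs if isinstance(outputs, list) else [outputs]
        return [choice["text"] for out in outputs for choice in out.get("choices", [])]

    @staticmethod
    def parse_logprobs(
        outputs: Union[Dict, List[Dict]],
        tokens: List[List[int]] = None,
        ctxlens: List[int] = None,
        **kwargs,
    ) -> List[Tuple[float, bool]]:
        # Sum continuation logprobs and check greediness against the response's top
        # logprobs; the bookkeeping is entirely API-specific, so it is left out here.
        ...

    # Depending on your API, you may also need to override the request-header
    # property and the `api_key` lookup, as described earlier in this guide.
```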
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/docs/API_guide.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/docs/API_guide.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 7742 }
# Contributing to LM Evaluation Harness Welcome and thank you for your interest in the LM Evaluation Harness! We welcome contributions and feedback and appreciate your time spent with our library, and hope you find it useful! ## Important Resources There are several places information about LM Evaluation Harness is located: - Our [documentation pages](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/docs) - We occasionally use [GitHub Milestones](https://github.com/EleutherAI/lm-evaluation-harness/milestones) to track progress toward specific near-term version releases. - We maintain a [Project Board](https://github.com/orgs/EleutherAI/projects/25) for tracking current work items and PRs, and for future roadmap items or feature requests. - Further discussion and support conversations are located in the #lm-thunderdome channel of the [EleutherAI discord](https://discord.gg/eleutherai). ## Code Style LM Evaluation Harness uses [ruff](https://github.com/astral-sh/ruff) for linting via [pre-commit](https://pre-commit.com/). You can install linters and dev tools via ```pip install lm_eval[dev]``` or ```pip install -e ".[dev]"``` Then, run ```pre-commit install``` in order to ensure linters and other checks will be run upon committing. ## Testing We use [pytest](https://docs.pytest.org/en/latest/) for running unit tests. All library unit tests can be run via: ``` python -m pytest --showlocals -s -vv -n=auto --ignore=tests/models/test_neuralmagic.py --ignore=tests/models/test_openvino.py ``` ## Contributor License Agreement We ask that new contributors agree to a Contributor License Agreement affirming that EleutherAI has the rights to use your contribution to our library. First-time pull requests will have a reply added by @CLAassistant containing instructions for how to confirm this, and we require it before merging your PR. ## Contribution Best Practices We recommend a few best practices to make your contributions or reported errors easier to assist with. **For Pull Requests:** - PRs should be titled descriptively, and be opened with a brief description of the scope and intent of the new contribution. - New features should have appropriate documentation added alongside them. - Aim for code maintainability, and minimize code copying. - If opening a task, try to share test results on the task using a publicly-available model, and if any public results are available on the task, compare to them. **For Feature Requests:** - Provide a short paragraph's worth of description. What is the feature you are requesting? What is its motivation, and an example use case of it? How does this differ from what is currently supported? **For Bug Reports**: - Provide a short description of the bug. - Provide a *reproducible example*--what is the command you run with our library that results in this error? Have you tried any other steps to resolve it? - Provide a *full error traceback* of the error that occurs, if applicable. A one-line error message or small screenshot snippet is unhelpful without the surrounding context. - Note what version of the codebase you are using, and any specifics of your environment and setup that may be relevant. **For Requesting New Tasks**: - Provide a 1-2 sentence description of what the task is and what it evaluates. - Provide a link to the paper introducing the task. - Provide a link to where the dataset can be found. - Provide a link to a paper containing results on an open-source model on the task, for use in comparisons and implementation validation. 
- If applicable, link to any codebase that has implemented the task (especially the original publication's codebase, if existent). ## How Can I Get Involved? To quickly get started, we maintain a list of good first issues, which can be found [on our project board](https://github.com/orgs/EleutherAI/projects/25/views/8) or by [filtering GH Issues](https://github.com/EleutherAI/lm-evaluation-harness/issues?q=is%3Aopen+label%3A%22good+first+issue%22+label%3A%22help+wanted%22). These are typically smaller code changes or self-contained features which can be added without extensive familiarity with library internals, and we recommend new contributors consider taking a stab at one of these first if they are feeling uncertain where to begin. There are a number of distinct ways to contribute to LM Evaluation Harness, and all are extremely helpful! A sampling of ways to contribute include: - **Implementing and verifying new evaluation tasks**: Is there a task you'd like to see LM Evaluation Harness support? Consider opening an issue requesting it, or helping add it! Verifying and cross-checking task implementations with their original versions is also a very valuable form of assistance in ensuring standardized evaluation. - **Improving documentation** - Improvements to the documentation, or noting pain points / gaps in documentation, are helpful in order for us to improve the user experience of the library and clarity + coverage of documentation. - **Testing and devops** - We are very grateful for any assistance in adding tests for the library that can be run for new PRs, and other devops workflows. - **Adding new modeling / inference library integrations** - We hope to support a broad range of commonly-used inference libraries popular among the community, and welcome PRs for new integrations, so long as they are documented properly and maintainable. - **Proposing or Contributing New Features** - We want LM Evaluation Harness to support a broad range of evaluation usecases. If you have a feature that is not currently supported but desired, feel free to open an issue describing the feature and, if applicable, how you intend to implement it. We would be happy to give feedback on the cleanest way to implement new functionalities and are happy to coordinate with interested contributors via GH discussions or via discord. We hope that this has been helpful, and appreciate your interest in contributing! Further questions can be directed to [our Discord](discord.gg/eleutherai).
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/docs/CONTRIBUTING.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/docs/CONTRIBUTING.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 6071 }
# Eval Harness Documentation Welcome to the docs for the LM Evaluation Harness! ## Table of Contents * To learn about the public interface of the library, as well as how to evaluate via the command line or as integrated into an external library, see the [Interface](./interface.md). * To learn how to add a new library, API, or model type to the library, as well as a quick explainer on the types of ways to evaluate an LM, see the [Model Guide](./model_guide.md). * For an extended description of how to extend the library to new model classes served over an API, see the [API Guide](./API_guide.md). * For a crash course on adding new tasks to the library, see our [New Task Guide](./new_task_guide.md). * To learn more about pushing the limits of task configuration that the Eval Harness supports, see the [Task Configuration Guide](./task_guide.md).
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/docs/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/docs/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 858 }
# Decontamination ## Usage The provided directory should contain the ngram files and info.json produced in "Pile Ngram Generation" further down. ```bash python -m lm_eval \ --model gpt2 \ --device 0 \ --tasks sciq ``` ## Background Downstream evaluations test model generalization, and are less useful when test set data also exists in the training set, referred to as leakage or contamination. Filtering your training set against the test set is a good first step, however this isn't always possible, as in the case of a new benchmark or one that wasn't considered prior to model training. When training set filtering isn't possible, it is useful to measure the impact of test set leakage by detecting the contaminated test examples and producing a clean version of the benchmark. The basis for our decontamination procedure can be found in Appendix C of "Language Models are Few-Shot Learners". OpenAI defined a test document as contaminated if any N-gram overlap existed with any training document. They used a range of N values between 8 and 13 depending on dataset, while we just used 13 for simplicity. ## Implementation Contamination detection can be found in `lm_eval/decontaminate.py` with supporting code in `lm_eval/decontamination/`. decontaminate.py does the following: 1. Build dictionaries of all ngrams and their corresponding evaluation/document ids. 2. Scan through sorted files containing training set n-grams. 3. If a match is found, the corresponding evaluation/document combinations are marked as contaminated. `lm_eval/evaluator.py` can then produce a clean version of the benchmark by excluding the results of contaminated documents. For each metric, a clean version will be shown in the results with a "decontaminate" suffix. This is disabled by default for new tasks, to support decontamination on a task override the "should_decontaminate" and "doc_to_decontamination_query" methods. For more details see the [task guide](task_guide.md). ## Pile Ngram Generation The relevant scripts can be found in `scripts/clean_training_data`, which also import from `lm_eval/decontamination/` 1. git clone https://github.com/EleutherAI/lm-evaluation-harness.git 2. pip install -r requirements.txt 3. Download The Pile from [The Eye](https://the-eye.eu/public/AI/pile/train/) 4. Place pile files in "pile" directory under "lm-evaluation-harness" (or create a symlink) 5. Run generate_13_grams. ```bash export PYTHONHASHSEED=0 python -m scripts/clean_training_data/generate_13_grams \ -dir path/to/working/directory \ -n 13 \ -buckets 500 ``` Took approximately 4 days for us. We had the time to wait, but this could be scaled out by doing partial pile scans on multiple instances of this script and merging the relevant buckets. We fixed PYTHONHASHSEED to ensure reproducibility of bucket hashing in case you need to stop and start. 6. Sort the generated 13-grams. ```bash python -m scripts/clean_training_data/sort_13_gram_buckets \ -dir path/to/working/directory/output ``` Took approximately 5 days for us. You could speed this up by spreading the files around to different machines and running the sort script before gathering them together. 7. Compress the sorted 13 grams files and place them together with info.json. This step only takes a few hours. ```bash python -m scripts/clean_training_data/compress_and_package \ -dir path/to/working/directory \ -output path/to/final/directory \ -procs 8 ```
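The 13-gram matching idea above can be illustrated with a small, self-contained sketch. This is a conceptual toy, not the library's implementation: the real pipeline normalizes text and streams sorted n-gram buckets from disk rather than holding everything in memory.

```python
from collections import defaultdict
from typing import Dict, Iterable, List, Set

N = 13  # n-gram size used for decontamination


def ngrams(text: str, n: int = N) -> Iterable[str]:
    """Yield whitespace-tokenized n-grams of a document."""
    toks = text.split()
    for i in range(len(toks) - n + 1):
        yield " ".join(toks[i : i + n])


def find_contaminated(eval_docs: Dict[int, str], train_docs: List[str]) -> Set[int]:
    """Return ids of eval documents that share any 13-gram with the training data."""
    # Step 1: build a dictionary mapping each eval n-gram to the doc ids containing it.
    index: Dict[str, Set[int]] = defaultdict(set)
    for doc_id, text in eval_docs.items():
        for gram in ngrams(text):
            index[gram].add(doc_id)

    # Steps 2-3: scan the training data; any matching n-gram marks those docs contaminated.
    contaminated: Set[int] = set()
    for text in train_docs:
        for gram in ngrams(text):
            if gram in index:
                contaminated |= index[gram]
    return contaminated
```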
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/docs/decontamination.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/docs/decontamination.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 3500 }
# User Guide This document details the interface exposed by `lm-eval` and provides details on what flags are available to users. ## Command-line Interface A majority of users run the library by cloning it from Github, installing the package as editable, and running the `python -m lm_eval` script. Equivalently, running the library can be done via the `lm-eval` entrypoint at the command line. This mode supports a number of command-line arguments, the details of which can be also be seen via running with `-h` or `--help`: - `--model` : Selects which model type or provider is evaluated. Must be a string corresponding to the name of the model type/provider being used. See [the main README](https://github.com/EleutherAI/lm-evaluation-harness/tree/main#model-apis-and-inference-servers) for a full list of enabled model names and supported libraries or APIs. - `--model_args` : Controls parameters passed to the model constructor. Accepts a string containing comma-separated keyword arguments to the model class of the format `"arg1=val1,arg2=val2,..."`, such as, for example `--model_args pretrained=EleutherAI/pythia-160m,dtype=float32`. For a full list of what keyword arguments, see the initialization of the `lm_eval.api.model.LM` subclass, e.g. [`HFLM`](https://github.com/EleutherAI/lm-evaluation-harness/blob/365fcda9b85bbb6e0572d91976b8daf409164500/lm_eval/models/huggingface.py#L66) - `--tasks` : Determines which tasks or task groups are evaluated. Accepts a comma-separated list of task names or task group names. Must be solely comprised of valid tasks/groups. A list of supported tasks can be viewed with `--tasks list`. - `--num_fewshot` : Sets the number of few-shot examples to place in context. Must be an integer. - `--gen_kwargs` : takes an arg string in same format as `--model_args` and creates a dictionary of keyword arguments. These will be passed to the models for all called `generate_until` (free-form or greedy generation task) tasks, to set options such as the sampling temperature or `top_p` / `top_k`. For a list of what args are supported for each model type, reference the respective library's documentation (for example, the documentation for `transformers.AutoModelForCausalLM.generate()`.) These kwargs will be applied to all `generate_until` tasks called--we do not currently support unique gen_kwargs or batch_size values per task in a single run of the library. To control these on a per-task level, set them in that task's YAML file. - `--batch_size` : Sets the batch size used for evaluation. Can be a positive integer or `"auto"` to automatically select the largest batch size that will fit in memory, speeding up evaluation. One can pass `--batch_size auto:N` to re-select the maximum batch size `N` times during evaluation. This can help accelerate evaluation further, since `lm-eval` sorts documents in descending order of context length. - `--max_batch_size` : Sets the maximum batch size to try to fit in memory, if `--batch_size auto` is passed. - `--device` : Sets which device to place the model onto. Must be a string, for example, `"cuda", "cuda:0", "cpu", "mps"`. Defaults to "cuda", and can be ignored if running multi-GPU or running a non-local model type. - `--output_path` : A string of the form `dir/file.jsonl` or `dir/`. Provides a path where high-level results will be saved, either into the file named or into the directory named. If `--log_samples` is passed as well, then per-document outputs and metrics will be saved into the directory as well. 
- `--log_samples` : If this flag is passed, then the model's outputs, and the text fed into the model, will be saved at per-document granularity. Must be used with `--output_path`. - `--limit` : Accepts an integer, or a float between 0.0 and 1.0 . If passed, will limit the number of documents to evaluate to the first X documents (if an integer) per task or first X% of documents per task. Useful for debugging, especially on costly API models. - `--use_cache` : Should be a path where a sqlite db file can be written to. Takes a string of format `/path/to/sqlite_cache_` in order to create a cache db at `/path/to/sqlite_cache_rank{i}.db` for each process (0-NUM_GPUS). This allows results of prior runs to be cached, so that there is no need to re-run results in order to re-score or re-run a given (model, task) pair again. - `--cache_requests` : Can be "true", "refresh", or "delete". "true" means that the cache should be used. "refresh" means that you wish to regenerate the cache, which you should run if you change your dataset configuration for a given task. "delete" will delete the cache. Cached files are stored under lm_eval/cache/.cache unless you specify a different path via the environment variable: `LM_HARNESS_CACHE_PATH`. e.g. `LM_HARNESS_CACHE_PATH=~/Documents/cache_for_lm_harness`. - `--check_integrity` : If this flag is used, the library tests for each task selected are run to confirm task integrity. - `--write_out` : Used for diagnostic purposes to observe the format of task documents passed to a model. If this flag is used, then prints the prompt and gold target string for the first document of each task. - `--show_config` : If used, prints the full `lm_eval.api.task.TaskConfig` contents (non-default settings the task YAML file) for each task which was run, at the completion of an evaluation. Useful for when one is modifying a task's configuration YAML locally to transmit the exact configurations used for debugging or for reproducibility purposes. - `--include_path` : Accepts a path to a folder. If passed, then all YAML files containing `lm-eval` compatible task configurations will be added to the task registry as available tasks. Used for when one is writing config files for their own task in a folder other than `lm_eval/tasks/`. - `--system_instruction`: Specifies a system instruction string to prepend to the prompt. - `--apply_chat_template` : This flag specifies whether to apply a chat template to the prompt. It can be used in the following ways: - `--apply_chat_template` : When used without an argument, applies the only available chat template to the prompt. For Hugging Face models, if no dedicated chat template exists, the default chat template will be applied. - `--apply_chat_template template_name` : If the model has multiple chat templates, apply the specified template to the prompt. For Hugging Face models, the default chat template can be found in the [`default_chat_template`](https://github.com/huggingface/transformers/blob/fc35907f95459d7a6c5281dfadd680b6f7b620e3/src/transformers/tokenization_utils_base.py#L1912) property of the Transformers Tokenizer. - `--fewshot_as_multiturn` : If this flag is on, the Fewshot examples are treated as a multi-turn conversation. Questions are provided as user content and answers are provided as assistant responses. Requires `--num_fewshot` to be set to be greater than 0, and `--apply_chat_template` to be on. - `--predict_only`: Generates the model outputs without computing metrics. Use with `--log_samples` to retrieve decoded results. 
* `--seed`: Set seed for python's random, numpy and torch. Accepts a comma-separated list of 3 values for python's random, numpy, and torch seeds, respectively, or a single integer to set the same seed for all three. The values are either an integer or 'None' to not set the seed. Default is `0,1234,1234` (for backward compatibility). E.g. `--seed 0,None,8` sets `random.seed(0)` and `torch.manual_seed(8)`. Here numpy's seed is not set since the second value is `None`. E.g, `--seed 42` sets all three seeds to 42. * `--wandb_args`: Tracks logging to Weights and Biases for evaluation runs and includes args passed to `wandb.init`, such as `project` and `job_type`. Full list [here](https://docs.wandb.ai/ref/python/init). e.g., ```--wandb_args project=test-project,name=test-run``` * `--hf_hub_log_args` : Logs evaluation results to Hugging Face Hub. Accepts a string with the arguments separated by commas. Available arguments: * `hub_results_org` - organization name on Hugging Face Hub, e.g., `EleutherAI`. If not provided, the results will be pushed to the owner of the Hugging Face token, * `hub_repo_name` - repository name on Hugging Face Hub (deprecated, `details_repo_name` and `results_repo_name` should be used instead), e.g., `lm-eval-results`, * `details_repo_name` - repository name on Hugging Face Hub to store details, e.g., `lm-eval-results`, * `results_repo_name` - repository name on Hugging Face Hub to store results, e.g., `lm-eval-results`, * `push_results_to_hub` - whether to push results to Hugging Face Hub, can be `True` or `False`, * `push_samples_to_hub` - whether to push samples results to Hugging Face Hub, can be `True` or `False`. Requires `--log_samples` to be set, * `public_repo` - whether the repository is public, can be `True` or `False`, * `leaderboard_url` - URL to the leaderboard, e.g., `https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard`. * `point_of_contact` - Point of contact for the results dataset, e.g., `[email protected]`. * `gated` - whether to gate the details dataset, can be `True` or `False`. ## External Library Usage We also support using the library's external API for use within model training loops or other scripts. `lm_eval` supplies two functions for external import and use: `lm_eval.evaluate()` and `lm_eval.simple_evaluate()`. `simple_evaluate()` can be used by simply creating an `lm_eval.api.model.LM` subclass that implements the methods described in the [Model Guide](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/docs/model_guide.md), and wrapping your custom model in that class as follows: ```python import lm_eval ... my_model = initialize_my_model() # create your model (could be running finetuning with some custom modeling code) ... # instantiate an LM subclass that takes your initialized model and can run # - `Your_LM.loglikelihood()` # - `Your_LM.loglikelihood_rolling()` # - `Your_LM.generate_until()` lm_obj = Your_LM(model=my_model, batch_size=16) # indexes all tasks from the `lm_eval/tasks` subdirectory. # Alternatively, you can set `TaskManager(include_path="path/to/my/custom/task/configs")` # to include a set of tasks in a separate directory. task_manager = lm_eval.tasks.TaskManager() # Setting `task_manager` to the one above is optional and should generally be done # if you want to include tasks from paths other than ones in `lm_eval/tasks`. # `simple_evaluate` will instantiate its own task_manager if it is set to None here. 
results = lm_eval.simple_evaluate( # call simple_evaluate model=lm_obj, tasks=["taskname1", "taskname2"], num_fewshot=0, task_manager=task_manager, ... ) ``` See the `simple_evaluate()` and `evaluate()` functions in [lm_eval/evaluator.py](../lm_eval/evaluator.py#:~:text=simple_evaluate) for a full description of all arguments available. All keyword arguments to simple_evaluate share the same role as the command-line flags described previously. Additionally, the `evaluate()` function offers the core evaluation functionality provided by the library, but without some of the special handling and simplification + abstraction provided by `simple_evaluate()`. As a brief example usage of `evaluate()`: ```python import lm_eval # suppose you've defined a custom lm_eval.api.Task subclass in your own external codebase from my_tasks import MyTask1 ... # create your model (could be running finetuning with some custom modeling code) my_model = initialize_my_model() ... # instantiate an LM subclass that takes your initialized model and can run # - `Your_LM.loglikelihood()` # - `Your_LM.loglikelihood_rolling()` # - `Your_LM.generate_until()` lm_obj = Your_LM(model=my_model, batch_size=16) # optional: the task_manager indexes tasks including ones # specified by the user through `include_path`. task_manager = lm_eval.tasks.TaskManager( include_path="/path/to/custom/yaml" ) # To get a task dict for `evaluate` task_dict = lm_eval.tasks.get_task_dict( [ "mmlu", # A stock task "my_custom_task", # A custom task { "task": ..., # A dict that configures a task "doc_to_text": ..., }, MyTask1 # A task object from `lm_eval.task.Task` ], task_manager # A task manager that allows lm_eval to # load the task during evaluation. # If none is provided, `get_task_dict` # will instantiate one itself, but this # only includes the stock tasks so users # will need to set this if including # custom paths is required. ) results = evaluate( lm=lm_obj, task_dict=task_dict, ... ) ```
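If you do not need a custom `LM` subclass, the same entry point can also be driven by the built-in Hugging Face wrapper. The snippet below is a minimal sketch assuming the `HFLM` class accepts the usual `pretrained`/`batch_size` keyword arguments (as used with `--model_args` on the CLI); the exact metric keys in the returned dictionary depend on the tasks and library version.

```python
import json

import lm_eval
from lm_eval.models.huggingface import HFLM

# Wrap a small HF checkpoint directly instead of writing a custom LM subclass.
lm_obj = HFLM(pretrained="EleutherAI/pythia-160m", batch_size=8)

results = lm_eval.simple_evaluate(
    model=lm_obj,
    tasks=["lambada_openai"],
    num_fewshot=0,
)

# Per-task metrics live under the "results" key of the returned dictionary.
print(json.dumps(results["results"], indent=2, default=str))
```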
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/docs/interface.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/docs/interface.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 12830 }
# New Model Guide This guide may be of special interest to users who are using the library outside of the repository, via installing the library via pypi and calling `lm_eval.evaluator.evaluate()` to evaluate an existing model. In order to properly evaluate a given LM, we require implementation of a wrapper class subclassing the `lm_eval.api.model.LM` class, that defines how the Evaluation Harness should interface with your model. This guide walks through how to write this `LM` subclass via adding it to the library! ## Setup To get started contributing, go ahead and fork the main repo, clone it, create a branch with the name of your model, and install the project requirements in your environment: ```sh # After forking... git clone https://github.com/<YOUR-USERNAME>/lm-evaluation-harness.git cd lm-evaluation-harness git checkout -b <model-type> pip install -e ".[dev]" ``` Now, we'll create a new file where we'll be adding our model: ```sh touch lm_eval/models/<my_model_filename>.py ``` **Tip: this filename should not shadow package names! For example, naming your file `anthropic.py` is disallowed since the API's name on pypi is `anthropic`, but naming it `anthropic_llms.py` works with no problems.** ## Interface All models must subclass the `lm_eval.api.model.LM` class. The LM class enforces a common interface via which we can extract responses from a model: ```python class MyCustomLM(LM): #... def loglikelihood(self, requests: list[Instance]) -> list[tuple[float, bool]]: #... def loglikelihood_rolling(self, requests: list[Instance]) -> list[tuple[float, bool]]: #... def generate_until(self, requests: list[Instance]) -> list[str]: #... #... ``` Where `Instance` is a dataclass defined in [`lm_eval.api.instance`](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/api/instance.py) with property `args` of request-dependent type signature described below. We support three types of requests, consisting of different interactions / measurements with an autoregressive LM. All three request types take as input `requests` of type `list[Instance]` that have a matching `Instance.request_type` to the method name. - `generate_until` - Each request contains `Instance.args : Tuple[str, dict]` containing 1. an input string to the LM and 2. a dictionary of keyword arguments used to control generation parameters. - Using this input and these generation parameters, text will be sampled from the language model (typically until a maximum output length or specific stopping string sequences--for example, `{"until": ["\n\n", "."], "max_gen_toks": 128}`). - The generated input+output text from the model will then be returned. - `loglikelihood` - Each request contains `Instance.args : Tuple[str, str]` containing 1. an input string to the LM and 2. a target string on which the loglikelihood of the LM producing this target, conditioned on the input, will be returned. - Each request will have, as result, `(ll, is_greedy): Tuple[float, int]` returned, where `ll` is a floating point number representing the log probability of generating the target string conditioned on the input, and `is_greedy` being either the value `0` or `1`, with it being `1` if and only if the target string *would be generated by greedy sampling from the LM* (that is, if the target string is the *most likely* N-token string to be output by the LM given the input. 
) - `loglikelihood_rolling` - Each request contains `Instance.args : Tuple[str]`, which is an input string to the model whose *entire* loglikelihood, conditioned on purely the EOT token, will be calculated. - This is used to evaluate *perplexity* on a data distribution. - It should return `(ll,) : Tuple[float]` , a.k.a. solely the *loglikelihood* of producing each piece of text given no starting input. To allow a model to be evaluated on all types of tasks, you will need to implement these three types of measurements (note that `loglikelihood_rolling` is a special case of `loglikelihood`). For a reference implementation, check out `lm_eval/models/huggingface.py` ! Additionally, check out `lm_eval.api.model.TemplateLM` for a class that abstracts away some commonly used functions across LM subclasses, or see if your model would lend itself well to subclassing the `lm_eval.models.huggingface.HFLM` class and overriding just the initialization or a couple methods! **Tip: be careful of indexing in loglikelihood!** LMs take in tokens in position `[0 1 2 ... N]` and output a probability distribution for token position `N+1`. We provide a simplified graphic here, excerpted from `huggingface.py`: ``` # how this all works (illustrated on a causal decoder-only setup): # CTX CONT # inp 0 1 2 3|4 5 6 7 8 9 <- last token is deleted by inp[:, :-1] # model \ \ # logits 1 2 3|4 5 6 7 8 9 <- the ctx half gets tossed out by the # cont_toks 4 5 6 7 8 9 [:, -len(continuation_enc):, :self.vocab_size] slice ``` The final token of the target is not passed into the LM, because we want the LM's predictions *up to but not past* that final target token. For more information, check out https://github.com/EleutherAI/lm-evaluation-harness/issues/942 . ## Registration Congrats on implementing your model! Now it's time to test it out. To make your model usable via the command line interface to `lm-eval` using `python -m lm_eval`, you'll need to tell `lm-eval` what your model's name is. This is done via a *decorator*, `lm_eval.api.registry.register_model`. Using `register_model()`, one can both tell the package what the model's name(s) to be used are when invoking it with `python -m lm_eval --model <name>` and alert `lm-eval` to the model's existence. ```python from lm_eval.api.registry import register_model @register_model("<name1>", "<name2>") class MyCustomLM(LM): ``` Using this decorator results in the class being added to an accounting of the usable LM types maintained internally to the library at `lm_eval.api.registry.MODEL_REGISTRY`. See `lm_eval.api.registry` for more detail on what sorts of registries and decorators exist in the library! **Tip: be sure to import your model in `lm_eval/models/__init__.py!`** ## Testing We also recommend that new model contributions be accompanied by short tests of their 3 core functionalities, at minimum. To see an example of such tests, look at https://github.com/EleutherAI/lm-evaluation-harness/blob/35bdecd379c0cefad6897e67db892f4a6026a128/tests/test_ggml.py . ## Chat Templating Many models are fine-tuned with a [Chat Template](https://huggingface.co/docs/transformers/main/en/chat_templating) in order to enable back-and-forth interaction between a "User"'s queries and the model (often called "Assistant")'s responses. It can be desirable to evaluate fine-tuned models on evaluation tasks while wrapped in the conversational format they expect. 
In order to make your model optionally compatible with a chat format, three additional methods must be implemented: ```python class MyCustomLM(LM): #... @property def tokenizer_name(self) -> str: """ Return the name of the model's tokenizer and/or the accompanying chat template. The returned string is used to cache requests. Returns: str: The name of the model's tokenizer and/or chat template. """ def chat_template(self, chat_template: Union[bool, str] = False) -> str: """ Get the appropriate chat template for the model based on the `chat_template` argument. This method returns the chat template string to build the prompt from a chat history. The chat template is saved in the evaluation results for reproducibility. Boolean arguments should be used with models that have only one chat template, while string arguments are used with models that have multiple chat templates. For the reference implementation, see HFLM class in `lm_eval.models.huggingface`. Args: chat_template (Union[bool, str]): Specifies whether to apply a chat template: - If False: Do not apply any chat template. - If True: Apply the default chat template. - If str: Apply the specified chat template by name. Returns: str: The selected chat template in Jinja format. """ def apply_chat_template(self, chat_history: List[Dict[str, str]]) -> str: """ Process a chat history to create a string that can be tokenized and input into the model. Args: chat_history (List[Dict[str, str]]): A list of dictionaries representing the chat history, where each dictionary has "role" and "content" keys. Returns: str: A string representing the chat history that can be tokenized and fed into the model. """ ``` - `apply_chat_template` - This method performs the bulk of the work required for chat-formatting. - As input, a `chat_history: List[Dict[str, str]]` is passed in. This is a transcript of a conversation of a form similar to ``` [ {"system": <user-provided system message such as "You are a helpful math-focused chatbot">}, {"user": <task example - a few-shot example 'input'>} {"assistant": <correct response to the above example>}, # ... more few-shot examples, potentially {"user": <test set query--response on which we will evaluate>}, ] ``` which can then be converted into a string input. - The output is a string representing this conversation that can be fed into the model. - For example, this consists of simply calling `tokenizer.apply_chat_template` for HFLM--see the implementation there for reference. - `tokenizer_name` - LM Eval Harness supports [caching requests](https://github.com/EleutherAI/lm-evaluation-harness/blob/4902aaaf1f374682f95ac25fe2e13b23faddc91a/lm_eval/__main__.py#L140) that are sent to a model, for faster setup when repeating an already-performed evaluation. - However, we don't want to use the cache of chat transcripts rendered using one chat template or system prompt to send to a model with a different template! So, we use this `lm.tokenizer_name` string to distinguish caches for a given model (and chat template) from one another. - `chat_template` - Chat templates are typically provided as a Jinja template string or a string formatted with str.format to include user and assistant messages in a single prompt. This template string is saved in the evaluation results to ensure reproducibility. If not implemented for a given model type, the flags `--apply_chat_template` , `--fewshot_as_multiturn`, and `--system_instruction` cannot be used. 
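As a concrete illustration, here is a minimal sketch of these three hooks for a model that wraps a Hugging Face tokenizer stored on a hypothetical `self.tokenizer` attribute. It is not the reference implementation (`HFLM` in `lm_eval.models.huggingface` handles template selection and cache naming more carefully), but it shows the general shape:

```python
from typing import Dict, List, Union

from lm_eval.api.model import LM


class MyCustomLM(LM):
    # ... loglikelihood / loglikelihood_rolling / generate_until as before ...

    @property
    def tokenizer_name(self) -> str:
        # Used to key the request cache, so transcripts rendered with one
        # tokenizer/template are never reused for a different one.
        return self.tokenizer.name_or_path.replace("/", "__")

    def chat_template(self, chat_template: Union[bool, str] = False) -> str:
        # This sketch assumes a single template attached to the tokenizer;
        # False disables chat formatting entirely.
        if chat_template is False:
            return ""
        return self.tokenizer.chat_template or ""

    def apply_chat_template(self, chat_history: List[Dict[str, str]]) -> str:
        # Render the conversation into one prompt string, leaving the
        # assistant turn open for the model to complete.
        return self.tokenizer.apply_chat_template(
            chat_history, tokenize=False, add_generation_prompt=True
        )
```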
## Other

**Pro tip**: In order to make the Evaluation Harness overestimate total runtimes rather than underestimate them, HuggingFace models come with the built-in ability to provide responses on data points in *descending order by total input length* via `lm_eval.utils.Reorderer`. Take a look at `lm_eval.models.huggingface.HFLM` to see how this is done, and see if you can implement it in your own model!

## Conclusion

After reading this guide, you should be able to add new model APIs or implementations to the Eval Harness library!
# New Task Guide `lm-evaluation-harness` is a framework that strives to support a wide range of zero- and few-shot evaluation tasks on autoregressive language models (LMs). This documentation page provides a walkthrough to get started creating your own task, in `lm-eval` versions v0.4.0 and later. A more interactive tutorial is available as a Jupyter notebook [here](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/examples/lm-eval-overview.ipynb). ## Setup If you haven't already, go ahead and fork the main repo, clone it, create a branch with the name of your task, and install the project requirements in your environment: ```sh # After forking... git clone https://github.com/<YOUR-USERNAME>/lm-evaluation-harness.git cd lm-evaluation-harness git checkout -b <task-name> pip install -e ".[dev]" ``` In this document, we'll walk through the basics of implementing a static benchmark evaluation in two formats: a *generative* task which requires sampling text from a model, such as [`gsm8k`](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/gsm8k/gsm8k.yaml), and a *discriminative*, or *multiple choice*, task where the model picks the most likely of several fixed answer choices, such as [`sciq`](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/sciq/sciq.yaml). ## Creating a YAML file To implement a new standard task, we'll need to write a YAML file which configures our task logic. We start by making a new empty YAML file. This file can have any name, but we recommend placing it in a subfolder of `lm_eval/tasks` titled by the dataset or task's shorthand name: for example, ```sh touch lm_eval/tasks/<dataset_name>/<my_new_task_name>.yaml ``` Or, copy the template subfolder we provide from `templates/new_yaml_task`: ```sh cp -r templates/new_yaml_task lm_eval/tasks/ ``` and rename the folders and YAML file(s) as desired. ### Selecting and configuring a dataset All data downloading and management is handled through the HuggingFace (**HF**) [`datasets`](https://github.com/huggingface/datasets) API. So, the first thing you should do is check to see if your task's dataset is already provided in their catalog [here](https://huggingface.co/datasets). If it's not in there, please consider adding it to their Hub to make it accessible to a wider user base by following their [new dataset guide](https://github.com/huggingface/datasets/blob/main/ADD_NEW_DATASET.md) . Once you have a HuggingFace dataset prepared for your task, we want to assign our new YAML to use this dataset: ```yaml dataset_path: ... # the name of the dataset on the HF Hub. dataset_name: ... # the dataset configuration to use. Leave `null` if your dataset does not require a config to be passed. See https://huggingface.co/docs/datasets/load_hub#configurations for more info. dataset_kwargs: null # any extra keyword arguments that should be passed to the dataset constructor, e.g. `data_dir`. ``` Next, we'd like to tell our task what the dataset's train, validation, and test splits are named, if they exist: ```yaml training_split: <split name of training set, or `null`> validation_split: <split name of val. set, or `null`> test_split: <split name of test set, or `null`> ``` Tests will run on the `test_split` if it is available, and otherwise evaluate on the `validation_split`. 
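To make this concrete, a filled-in version of the two blocks above for a hypothetical Hub dataset (the path and split names below are illustrative, not an existing task) could look like:

```yaml
dataset_path: my_org/my_qa_dataset   # hypothetical dataset on the HF Hub
dataset_name: null                   # this dataset needs no sub-configuration
dataset_kwargs: null
training_split: train
validation_split: validation
test_split: test
```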
We can also specify from which split the task should retrieve few-shot examples via: ```yaml fewshot_split: <split name to draw fewshot examples from, or `null`> ``` or by hardcoding them, either using the following in the yaml file: ```yaml fewshot_config: sampler: first_n samples: [ {<sample 1>}, {<sample 2>}, ] ``` or by adding the function `list_fewshot_samples` in the associated utils.py file: ```python def list_fewshot_samples() -> list[dict]: return [{<sample 1>}, {<sample 2>}] ``` See `lm_eval/tasks/minerva_math/minerva_math_algebra.yaml` for an example of the latter, and `lm_eval/tasks/gsm8k/gsm8k-cot.yaml` for an example of the former. In this case, each sample must contain the same fields as the samples in the above sets--for example, if `doc_to_text` expects an `input` field when rendering input prompts, these provided samples must include an `input` key. If neither above options are not set, we will default to train/validation/test sets, in that order. Finally, our dataset may not be already in the exact format we want. Maybe we have to strip whitespace and special characters via a regex from our dataset's "question" field! Or maybe we just want to rename its columns to match a convention we'll be using for our prompts. Let's create a python file in the directory where we're writing our YAML file: ```bash touch lm_eval/tasks/<dataset_name>/utils.py ``` Now, in `utils.py` we'll write a function to process each split of our dataset: TODO: Change the example to one that's in the tasks/ ```python def process_docs(dataset: datasets.Dataset): def _helper(doc): # modifies the contents of a single # document in our dataset. doc["choices"] = [doc["choice1"], doc["choice2"], doc["wrong_answer"]] doc["gold"] = doc["label"] return doc return dataset.map(_helper) # returns back a datasets.Dataset object ``` Now, in our YAML config file we'll use the `!function` constructor, and tell the config where our imported Python function will come from. At runtime, before doing anything else we will preprocess our dataset according to this function! ```yaml process_docs: !function utils.process_docs ``` ### Using Local Datasets To load a local dataset for evaluation, you can specify data files in the `dataset_kwargs` field, such as the following for JSON files: ``` dataset_path: json dataset_name: null dataset_kwargs: data_files: /path/to/my/json ``` Or with files already split into separate directories: ``` dataset_path: arrow dataset_kwargs: data_files: train: /path/to/arrow/train/data-00000-of-00001.arrow validation: /path/to/arrow/validation/data-00000-of-00001.arrow ``` Alternatively, if you have previously downloaded a dataset from huggingface hub (using `save_to_disk()`) and wish to use the local files, you will need to use `data_dir` under `dataset_kwargs` to point to where the directory is. ``` dataset_path: hellaswag dataset_kwargs: data_dir: hellaswag_local/ ``` You can also set `dataset_path` as a directory path in your local system. This will assume that there is a loading script with the same name as the directory. [See datasets docs](https://huggingface.co/docs/datasets/loading#local-loading-script). ## Writing a Prompt Template The next thing we need to do is decide what format to use when presenting the data to the LM. This is our **prompt**, where we'll define both an input and output format. To write a prompt, users will use `doc_to_text`, `doc_to_target`, and `doc_to_choice` (Optional when certain conditions are met). 
`doc_to_text` defines the input string a model will be given while `doc_to_target` and `doc_to_choice` will be used to generate the target text. `doc_to_target` can be either a text string that refers to the target string or an integer that refers to the index of the correct label. When it is set as an index, `doc_to_choice` must be also be set with the appropriate list of possible choice strings. ### Basic prompts If a dataset is straightforward enough, users can enter the feature name directly. This assumes that no preprocessing is required. For example in [Swag](https://github.com/EleutherAI/lm-evaluation-harness/blob/1710b42d52d0f327cb0eb3cb1bfbbeca992836ca/lm_eval/tasks/swag/swag.yaml#L10-L11), `doc_to_text` and `doc_to_target` given the name of one of the feature each. ```yaml doc_to_text: startphrase doc_to_target: label ``` Hard-coding is also possible as is the case in [SciQ](https://github.com/EleutherAI/lm-evaluation-harness/blob/1710b42d52d0f327cb0eb3cb1bfbbeca992836ca/lm_eval/tasks/sciq/sciq.yaml#L11). ```yaml doc_to_target: 3 ``` `doc_to_choice` can be directly given a list of text as option (See [Toxigen](https://github.com/EleutherAI/lm-evaluation-harness/blob/1710b42d52d0f327cb0eb3cb1bfbbeca992836ca/lm_eval/tasks/toxigen/toxigen.yaml#L11)) ```yaml doc_to_choice: ['No', 'Yes'] ``` if a dataset feature is already a list, you can set the name of the feature as `doc_to_choice` (See [Hellaswag](https://github.com/EleutherAI/lm-evaluation-harness/blob/e0eda4d3ffa10e5f65e0976161cd134bec61983a/lm_eval/tasks/hellaswag/hellaswag.yaml#L13)) ``` doc_to_choice: choices ``` ### Writing a prompt with Jinja 2 We support the [Jinja 2](https://jinja.palletsprojects.com/en/3.1.x/) templating language for writing prompts. In practice, this means you can take your dataset's columns and do many basic string manipulations to place each document into prompted format. Take for example the dataset `super_glue/boolq`. As input, we'd like to use the features `passage` and `question` and string them together so that for a a sample line `doc`, the model sees something the format of: ``` doc["passage"] Question: doc["question"]? Answer: ``` We do this by [writing](https://github.com/EleutherAI/lm-evaluation-harness/blob/1710b42d52d0f327cb0eb3cb1bfbbeca992836ca/lm_eval/tasks/super_glue/boolq/default.yaml#L9C1-L9C61) ```yaml doc_to_text: "{{passage}}\nQuestion: {{question}}?\nAnswer:" ``` Such that `{{passage}}` will be replaced by `doc["passage"]` and `{{question}}` with `doc["question"]` when rendering the prompt template. Our intended output is for the model to predict a single whitespace, and then the answer to the question. We do this via: ```yaml doc_to_target: "{{answer}}" ``` **Important**: we now add `target_delimiter` between input and target which defaults to " ", such that the full input-output string is `doc_to_target(doc) + target_delimiter + doc_to_text(doc)`. `doc_to_text` and `doc_to_target` should not contain trailing right or left whitespace, respectively. #### Multiple choice format For tasks which are multiple choice (a fixed, finite set of label words per each document) and evaluated via comparing loglikelihoods of all label words (the `multiple_choice` task output type) we enforce a particular convention on prompt format. An annotated example in the case of SciQ is as follows: ```yaml doc_to_text: "{{support.lstrip()}}\nQuestion: {{question}}\nAnswer:" # This is the input portion of the prompt for this doc. 
It will have " {{choice}}" appended to it as target for each choice in answer_choices. doc_to_target: 3 # this contains the index into the answer choice list of the correct answer. doc_to_choice: "{{[distractor1, distractor2, distractor3, correct_answer]}}" ``` Task implementers are thus able to decide what the answer choices should be for a document, and what prompt format to use. The label index can also be sourced from a feature directly. For example in `superglue/boolq`, the label index if defined in the feature `label`. We can set `doc_to_target` as simply `label`. The options or verbalizers can be written in a the form of a list `["no", "yes"]` that will correspond to the label index. ```yaml doc_to_text: "{{passage}}\nQuestion: {{question}}?\nAnswer:" doc_to_target: label doc_to_choice: ["no", "yes"] ``` ### Using Python Functions for Prompts There may be cases where the prompt we want to implement is easier expressed in Python instead of Jinja 2. For this, we can use Python helper functions that are defined in the YAML config. It should be noted that the function script must be in the same directory as the yaml. A good example is WikiText that requires a lot of regex rules to clean the samples. ``` def wikitext_detokenizer(doc): string = doc["page"] # contractions string = string.replace("s '", "s'") string = re.sub(r"/' [0-9]/", r"/'[0-9]/", string) ... string = string.replace(" 's", "'s") return string ``` We can load this function in `doc_to_target` by using a `!function` operator after `doc_to_target` and followed by `<file name>.<function name>`. In the file [wikitext.yaml](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/wikitext/wikitext.yaml) we write: ``` doc_to_target: !function preprocess_wikitext.wikitext_detokenizer ``` ### Importing a Prompt from Promptsource [Promptsource](https://github.com/bigscience-workshop/promptsource/tree/main/promptsource) is a great repository for crowdsourced prompts for many datasets. We can load these prompts easily by using the `use_prompt` argument and filling it with the format `"promptsource:<name of prompt template>"`. To use this, `doc_to_text` and `doc_to_target` should be left undefined. This will fetch the template of the dataset defined in the YAML file. For example, For Super Glue BoolQ, if we want to use the prompt template `GPT-3 Style` we can add this to the YAML file. ``` use_prompt: "promptsource:GPT-3 Style" ``` If you would like to run evaluation on all prompt templates, you can simply call it this way. ``` use_prompt: "promptsource:*" ``` ### Setting metrics You're almost done! Now we need to choose how to score our task. - *If this is a multiple choice task:* do you just want to check your model's accuracy in choosing the correct answer choice? - *If this is a generation task:* do you just want to check how often your model outputs *exactly the ground-truth output string provided*? If the answer to the above is no: you'll need to record what scoring metrics to use! Metrics can be listed in the following format: ```yaml metric_list: - metric: <name of the metric here> aggregation: <name of the aggregation fn here> higher_is_better: <true or false> - metric: !function script.function aggregation: ... higher_is_better: ... ``` `aggregation` and `higher_is_better` can optionally be left out to default to the manually-set defaults if using a natively supported metric, otherwise it must be defined explicitly (for example, when using a custom metric implemented as a function). 
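For instance, a generative task scored purely by exact string match could declare the following (an illustrative snippet, not copied from an existing task config):

```yaml
metric_list:
  - metric: exact_match
    aggregation: mean
    higher_is_better: true
```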
For a full list of natively supported metrics and aggregation functions see [`docs/task_guide.md`](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/task_guide.md). All metrics supported in [HuggingFace Evaluate](https://github.com/huggingface/evaluate/tree/main/metrics) can also be used, and will be loaded if a given metric name is not one natively supported in `lm-eval` or `hf_evaluate` is set to `true`. ### Optional, More Advanced Setup Some tasks may require more advanced processing logic than is described in this guide. As a heuristic check: * Does your task require generating multiple free-form outputs per input document? * Does your task require complex, multi-step post-processing of generated model outputs? * Does your task require subsetting documents on the fly based on their content? * Do you expect to compute metrics after applying multiple such processing steps on your model outputs? * Does your task rely on metrics that need a custom implementation? For more detail on the task system and advanced features, see [`docs/task_guide.md`](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/task_guide.md) . If none of the above sound like they apply to your task, it's time to continue onto checking your task performance! ### Task name + tags (registering a task) To test a task conveniently, it helps to *register* the task--that is, to give it a name and make the `lm-eval` library aware it exists! If you're writing your YAML file inside the `lm_eval/tasks` folder, you just need to give your task a name! You can do this inside your YAML file: ```yaml task: <name of the task> ``` Including a task name is mandatory. It is often also convenient to label your task with several `tag` values, though this field is optional: ```yaml tag: - tag1 - tag2 ``` This will add your task to the `tag1` and `tag2` tags, enabling people to know how to categorize your task, and if desired run all tasks in one of these groups at once, your task along with them. If your task is not in the `lm_eval/tasks` folder, you'll need to tell the Eval Harness where to look for YAML files. You can do this via the `--include_path` argument in `__main__.py`. This command will be used to initialize the `TaskManager` object which you can also use for your custom scripts. ```python task_manager = TaskManager(args.verbosity, include_path=args.include_path) ``` Passing `--tasks /path/to/yaml/file` is also accepted. ### Advanced Group Configs While `tag` values are helpful when you want to be able to quickly and conveniently run a set of related tasks via `--tasks my_tag_name`, often, we wish to implement more complex logic. For example, the MMLU benchmark contains 57 *subtasks* that must all be *averaged* together in order to report a final 'MMLU score'. Groupings of tasks might also use particular variants of a task--for example, we might want to default to evaluating a task as 5-shot when called as part of a given grouping, but not have a preference for number of shots when evaluating it as a standalone. We implement this via **groups**, which are distinct from tags. Groups can be implemented via *group config* YAML files, which are laid out similarly but slightly differently to tasks' YAML configs. 
The most basic form of group can be defined via a YAML config similar to the following: ```yaml group: nli_tasks task: - cb - anli_r1 - rte metadata: version: 1.0 ``` This will behave almost identically to a `tag` that includes these 3 tasks, but with one key distinction: we'll print the `nli_tasks` group as a row (with no associated metrics) in our table of outputs, and visually show that these 3 tasks appear under its subheader. Now, let's assume we actually want to report an aggregate score for `nli_tasks`. We would instead use a YAML config like the following: ```yaml group: nli_tasks task: - cb - anli_r1 - rte aggregate_metric_list: - metric: acc aggregation: mean weight_by_size: true # defaults to `true`. Set this to `false` to do a "macro" average (taking each subtask's average accuracy, and summing those accuracies and dividing by 3)--by default we do a "micro" average (retain all subtasks' per-document accuracies, and take the mean over all documents' accuracies to get our aggregate mean). metadata: version: 1.0 ``` Similar to our `metric_list` for listing out the metrics we want to calculate for a given task, we use an `aggregate_metric_list` field to specify which metric name to aggregate across subtasks, what aggregation function to use, and whether we should micro- or macro- average these metrics. See [./task_guide.md](./task_guide.md) for a full list of related sub-keys. **[!Tip]: currently, we predominantly only support the aggregation of group metrics that use `mean` (either micro- or macro- averaged) over their subtasks. If you require even more complex aggregation rules, you may want to perform aggregation offline.** Group configs can be fairly complex! We can do various operations, such as defining new subtask(s) inline in our group YAML, overriding an existing task's specific config value, or nesting existing groups within our For example, let's build a config for evaluating MMLU and a few natural language inference tasks. For MMLU, we can write the name for the benchmark as a subtask written under `task`. You can configure the parameters such as `num_fewshot`. If the task being configured is a group such as `mmlu` or `super_glue`, the parameter set will be applied to all of the subtasks. ```yaml group: nli_and_mmlu task: - group: nli_tasks task: - cb - anli_r1 - rte aggregate_metric_list: - metric: acc aggregation: mean higher_is_better: true - task: mmlu num_fewshot: 2 ``` ### Configuring python classes There can occasions when yaml-based tasks cannot accommodate how a task is handled. LM-Eval supports the manually implementing tasks as was previously done before `0.4.x`. To register the task, you can simply make a yaml with the name of the task in `task` and the class object in `class` using the `!function` prefix. ```yaml task: squadv2 class: !function task.SQuAD2 ``` This also applies to building group configurations with subtasks that are python classes. ```yaml group: scrolls task: - task: scrolls_qasper class: !function task.Qasper - task: scrolls_quality class: !function task.QuALITY - task: scrolls_narrativeqa class: !function task.NarrativeQA ... ``` You can also pass a custom argument to your class by accepting `config` in the custom class constructor. Here's how to do it: ```yaml task: 20_newsgroups class: !function task.Unitxt recipe: card=cards.20_newsgroups,template=templates.classification.multi_class.title ``` In this example, `recipe` is the custom argument for the `Unitxt` class. 
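To sketch how such a custom argument reaches the class, a heavily simplified stand-in for the `Unitxt` task class might look like the following. The real class lives in the `task.py` next to the YAML and must also implement the usual `Task` methods (document iteration, prompt rendering, and so on), which are omitted here:

```python
from typing import Optional


class Unitxt:  # wired up from the YAML via `class: !function task.Unitxt`
    def __init__(self, config: Optional[dict] = None) -> None:
        config = config or {}
        # Extra keys in the task YAML (here, `recipe`) are delivered through
        # `config`, so they can steer dataset loading or prompt construction.
        self.recipe = config.get("recipe")
```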
## Beautifying Table Display To avoid conflict, each task needs to be registered with a unique name. Because of this, slight variations of task are still counted as unique tasks and need to be named uniquely. This could be done by appending an additional naming that may refer to the variation such as in MMLU where the template used to evaluated for flan are differentiated from the default by the prefix `mmlu_flan_*`. Printing the full task names can easily clutter the results table at the end of the evaluation especially when you have a long list of tasks or are using a benchmark that comprises of many tasks. To make it more legible, you can use `task_alias` and `group_alias` to provide an alternative task name and group name that will be printed. For example in `mmlu_abstract_algebra.yaml` we set `task_alias` to `abstract_algebra`. In group configs, a `group_alias` for a group can also be set. ``` "dataset_name": "abstract_algebra" "description": "The following are multiple choice questions (with answers) about abstract\ \ algebra.\n\n" "include": "_default_template_yaml" "task": "mmlu_abstract_algebra" "task_alias": "abstract_algebra" ``` ## Checking validity After registering your task, you can now check on your data downloading and verify that the few-shot samples look as intended. Run the following command with your desired args: ```bash python -m scripts.write_out \ --output_base_path <path> \ --tasks <your-task-name> \ --sets <train | val | test> \ --num_fewshot K \ --num_examples N \ ``` Open the file specified at the `--output_base_path <path>` and ensure it passes a simple eye test. ## Versioning One key feature in LM Evaluation Harness is the ability to version tasks and groups--that is, mark them with a specific version number that can be bumped whenever a breaking change is made. This version info can be provided by adding the following to your new task or group config file: ``` metadata: version: 0 ``` Now, whenever a change needs to be made to your task in the future, please increase the version number by 1 so that users can differentiate the different task iterations and versions. If you are incrementing a task's version, please also consider adding a changelog to the task's README.md noting the date, PR number, what version you have updated to, and a one-liner describing the change. for example, * \[Dec 25, 2023\] (PR #999) Version 0.0 -> 1.0: Fixed a bug with answer extraction that led to underestimated performance. ## Checking performance + equivalence It's now time to check models' performance on your task! In the evaluation harness, we intend to support a wide range of evaluation tasks and setups, but prioritize the inclusion of already-proven benchmarks following the precise evaluation setups in the literature where possible. To enable this, we provide a checklist that should be completed when contributing a new task, to enable accurate book-keeping and to ensure that tasks added to the library are well-tested and, where applicable, precedented. ### Task Validity Checklist The checklist is the following: For adding novel benchmarks/datasets to the library: * [ ] Is the task an existing benchmark in the literature? * [ ] Have you referenced the original paper that introduced the task? * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? 
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant? It is recommended to include a filled-out copy of this checklist in the README.md for the subfolder you are creating, if you have created a new subfolder in `lm_eval/tasks`. **Finally, please add a short description of your task(s), along with a link to its subfolder in lm_eval/tasks , to [`lm_eval/tasks/README.md`](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/README.md) so that users can discover your task in the library, and follow the link to your README for more information about the variants supported, their task names, and the original source of the dataset and/or evaluation setup.** ## Submitting your task You're all set! Now push your work and make a pull request to the `main` branch! Thanks for the contribution :). If there are any questions, please leave a message in the `#lm-thunderdome` channel on the EAI discord!
# Task Configuration The `lm-evaluation-harness` is meant to be an extensible and flexible framework within which many different evaluation tasks can be defined. All tasks in the new version of the harness are built around a YAML configuration file format. These YAML configuration files, along with the current codebase commit hash, are intended to be shareable such that providing the YAML config enables another researcher to precisely replicate the evaluation setup used by another, in the case that the prompt or setup differs from standard `lm-eval` task implementations. While adding a standard evaluation task on a new dataset can be occasionally as simple as swapping out a Hugging Face dataset path in an existing file, more specialized evaluation setups also exist. Here we'll provide a crash course on the more advanced logic implementable in YAML form available to users. If your intended task relies on features beyond what are described in this guide, we'd love to hear about it! Feel free to open an issue describing the scenario on Github, create a PR to the project with a proposed implementation, or ask in the `#lm-thunderdome` channel on the EleutherAI discord. ## Configurations Tasks are configured via the `TaskConfig` object. Below, we describe all fields usable within the object, and their role in defining a task. ### Parameters Task naming + registration: - **task** (`str`, defaults to None) — name of the task. - **task_alias** (`str`, defaults to None) - Alias of the task name that will be printed in the final table results. - **tag** (`str`, *optional*) — name of the task tags(s) a task belongs to. Enables one to run all tasks with a specified tag name at once. Dataset configuration options: - **dataset_path** (`str`) — The name of the dataset as listed by HF in the datasets Hub. - **dataset_name** (`str`, *optional*, defaults to None) — The name of what HF calls a “data instance” or sub-task of the benchmark. If your task does not contain any data instances, just leave this to default to None. (If you're familiar with the HF `datasets.load_dataset` function, these are just the first 2 arguments to it.) - **dataset_kwargs** (`dict`, *optional*) — Auxiliary arguments that `datasets.load_dataset` accepts. This can be used to specify arguments such as `data_files` or `data_dir` if you want to use local datafiles such as json or csv. - **training_split** (`str`, *optional*) — Split in the dataset to use as the training split. - **validation_split** (`str`, *optional*) — Split in the dataset to use as the validation split. - **test_split** (`str`, *optional*) — Split in the dataset to use as the test split. - **fewshot_split** (`str`, *optional*) — Split in the dataset to draw few-shot exemplars from. assert that this not None if num_fewshot > 0. - **process_docs** (`Callable`, *optional*) — Optionally define a function to apply to each HF dataset split, to preprocess all documents before being fed into prompt template rendering or other evaluation steps. Can be used to rename dataset columns, or to process documents into a format closer to the expected format expected by a prompt template. Prompting / in-context formatting options: - **use_prompt** (`str`, *optional*) — Name of prompt in promptsource to use. if defined, will overwrite doc_to_text, doc_to_target, and doc_to_choice. 
- **description** (`str`, *optional*) — An optional prepended Jinja2 template or string which will be prepended to the few-shot examples passed into the model, often describing the task or providing instructions to a model, such as `"The following are questions (with answers) about {{subject}}.\n\n"`. No delimiters or spacing are inserted between the description and the first few-shot example. - **doc_to_text** (`Union[Callable, str]`, *optional*) — Jinja2 template, string, or function to process a sample into the appropriate input for the model. - **doc_to_target** (`Union[Callable, str]`, *optional*) — Jinja2 template, string, or function to process a sample into the appropriate target output for the model. For multiple choice tasks, this should return an index into the answer choice list of the correct answer. - **doc_to_choice** (`Union[Callable, str]`, *optional*) — Jinja2 template, string, or function to process a sample into a list of possible string choices for `multiple_choice` tasks. Left undefined for `generate_until` tasks. - **fewshot_delimiter** (`str`, *optional*, defaults to "\n\n") — String to insert between few-shot examples. - **target_delimiter** (`str`, *optional*, defaults to `" "`) — String to insert between input and target output for the datapoint being tested. Runtime configuration options: - **num_fewshot** (`int`, *optional*, defaults to 0) — Number of few-shot examples before the input. - **batch_size** (`int`, *optional*, defaults to 1) — Batch size. Scoring details: - **metric_list** (`str`, *optional*, defaults to None) — A list of metrics to use for evaluation. See docs for expected format. - **output_type** (`str`, *optional*, defaults to "generate_until") — Selects the type of model output for the given task. Options are `generate_until`, `loglikelihood`, `loglikelihood_rolling`, and `multiple_choice`. - **generation_kwargs** (`dict`, *optional*) — Auxiliary arguments for the `generate` function from HF transformers library. Advanced keyword arguments may not be supported for non-HF LM classes. - **repeats** (`int`, *optional*, defaults to 1) — Number of repeated runs through model for each sample. can be used for cases such as self-consistency. - **filter_list** (`Union[str, list]`, *optional*) — List of filters to postprocess model outputs. See below for further detail on the filter API. - **should_decontaminate** (`bool`, *optional*, defaults to False) - Whether to decontaminate or not. - **doc_to_decontamination_query** (`str`, *optional*) — Query for decontamination if `should_decontaminate` is True. If `should_decontaminate` is True but `doc_to_decontamination_query` is `None`, `doc_to_decontamination_query` will follow `doc_to_text`. Other: - **metadata** (`dict`, *optional*) — An optional field where arbitrary metadata can be passed. Most tasks should include a `version` key in this field that is used to denote the version of the yaml config. Other special metadata keys are: `num_fewshot`, to override the printed `n-shot` table column for a task. ## Filters A key component of the `lm-evaluation-harness` library is the `Filter` object. In a typical evaluation run of the harness, we take the formatted inputs and run them through our LM, with the appropriate output type (greedy or free-form generation, or loglikelihood-based comparative scoring). After getting scores or output text from our LM on each `Instance` or document in the dataset, we then need to feed these responses into a metric or scoring function to return scores to a user. 
However, certain tasks may require more complex behavior than directly turning over model outputs to a metric function. For example, we may want to post-process our output text by truncating it or extracting a model's answer, we may want to ensemble over multiple "takes" on a different document, et cetera. **Detailed Aside**: We do such post-processing by operating on *responses*, which are stored after running an LM on an `Instance` from the task in `Instance.resps`. `resps` is a `List[str]` for each instance, and we pass a `List[List[<expected return type from model>]]` to our filters that is a list of `[instance.resps for instance in instances]`. Our filters, after completing a pipeline, must return a `List[<expected return type from model>]` which we then unpack and store each element of in `Instance.filtered_resps` for the corresponding instance. Thus, we take as input a list of returns from our model for each doc, and must return a return from our model *without it being wrapped in a list* for each doc. **End Aside** A full list of supported filter operations can be found in `lm_eval/filters/__init__.py`. Contributions of new filter types are welcome! ### Multiple Filter Pipelines Tasks need not be limited to a single filter pipeline. We enable users to run multiple, distinct, filter pipelines on *the same model outputs* generated in one run on a task. As a case study, let's look at an implementation of solving the Gsm8k math word problem benchmark in `lm_eval/tasks/gsm8k/gsm8k-cot-self-consistency.yaml`. Here, we are emulating the setup used by [Self-Consistency Improves Chain of Thought Prompting](https://arxiv.org/abs/2203.11171), in which evaluation is performed by generating N chain-of-thought outputs from a model via temperature-based sampling, then selecting the answers output by the model at the end of the chains of thought, then majority voting across all those numeric answers. Within our YAML file: ```yaml ... repeats: 64 filter_list: - name: "score-first" filter: - function: "regex" regex_pattern: "The answer is (\\-?[0-9\\.\\,]*[0-9]+)" - function: "take_first" - name: "maj@64" filter: - function: "regex" regex_pattern: "The answer is (\\-?[0-9\\.\\,]*[0-9]+)" - function: "majority_vote" - function: "take_first" - name: "maj@8" filter: - function: "take_first_k" k: 8 - function: "regex" regex_pattern: "The answer is (\\-?[0-9\\.\\,]*[0-9]+)" - function: "majority_vote" - function: "take_first" ``` We are able to provide multiple different filter pipelines, each with their own name and list of filters to apply in sequence. Our first filter pipeline implements - applying a regex to the model generations (extracting the number within the phrase "The answer is (number)") - selecting only the first out of the 64 model answers Then scoring this single answer. ```yaml - name: "score-first" filter: - function: "regex" regex_pattern: "The answer is (\\-?[0-9\\.\\,]*[0-9]+)" - function: "take_first" ``` Our second filter pipeline, "maj@64", does majority voting across all 64 answers via: - applying the same regex to all responses, to get the numerical answer from the model for each of the 64 responses per problem - applying majority voting to all responses, which then returns a length-1 `[<majority answer>]` list for each - taking the first element of this length-1 list, to then score the sole response `<majority answer>` for each document. 
```yaml - name: "maj@64" filter: - function: "regex" regex_pattern: "The answer is (\\-?[0-9\\.\\,]*[0-9]+)" - function: "majority_vote" - function: "take_first" ``` Our final filter pipeline, "maj@8", does majority voting across the first 8 of the model's responses per document via: - subsetting the len-64 list of responses `[answer1, answer2, ..., answer64]` to `[answer1, answer2, ..., answer8]` for each document - performing the same sequence of filters on these new sets of 8 responses, for each document. ```yaml - name: "maj@8" filter: - function: "take_first_k" k: 8 - function: "regex" regex_pattern: "The answer is (\\-?[0-9\\.\\,]*[0-9]+)" - function: "majority_vote" - function: "take_first" ``` Thus, given the 64 responses from our LM on each document, we can report metrics on these responses in these 3 different ways, as defined by our filter pipelines. ### Adding a custom filter Just like adding a custom model with `register_model` decorator one is able to do the same with filters, for example ```python from lm_eval.api.filter import Filter from lm_eval.api.registry import register_filter @register_filter("new_filter") class NewFilter(Filter) ... ``` ## Embedded Python Code Use can use python functions for certain arguments by using the `!function` operator after the argument name followed by `<filename>.<pythonfunctionname>`. This feature can be used for the following arguments: 1. `doc_to_text` 2. `doc_to_target` 3. `doc_to_choice` 4. `aggregation` for a `metric` in `metric_list` ## (No Longer Recommended) Direct `Task` Subclassing The prior implementation method of new tasks was to subclass `Task`. While we intend to migrate all tasks to the new YAML implementation option going forward, it remains possible to subclass the Task class and implement custom logic. For more information, see `docs/task_guide.md` in v0.3.0 of the `lm-evaluation-harness`. ## Including a Base YAML You can base a YAML on another YAML file as a template. This can be handy when you need to just change the prompt for `doc_to_text` but keep the rest the same or change `filters` to compare which is better. Simply use `include` in the YAML file and write the name of the template you want to base from. This assumes that the base temeplate is in the same directory. Otherwise, You will need to define the full path. ``` include: <YAML filename or with full path> ... ``` You can find an example of how to use this feature at [gsm8k-cot-self-consistency.yaml](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/gsm8k/gsm8k-cot-self-consistency.yaml) where it is based off [gsm8k-cot.yaml](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/gsm8k/gsm8k-cot.yaml) ## Passing Arguments to Metrics Metrics can be defined in the `metric_list` argument when building the YAML config. Multiple metrics can be listed along with any auxiliary arguments. For example, setting the [`exact_match` metric](https://github.com/huggingface/evaluate/tree/main/metrics/exact_match), auxiliary arguments such as `ignore_case`, `ignore_punctuation`, `regexes_to_ignore` can be listed as well. They will be added to the metric function as `kwargs`. Some metrics have predefined values for `aggregation` and `higher_is_better` so listing the metric name only can be sufficient. 
``` metric_list: - metric: acc - metric: exact_match aggregation: mean higher_is_better: true ignore_case: true ignore_punctuation: false regexes_to_ignore: - "," - "\\$" ``` ### Natively Supported Metrics Here we list all metrics currently supported natively in `lm-eval`: Metrics: * `acc` (accuracy) * `acc_norm` (length-normalized accuracy) * `acc_mutual_info` (baseline loglikelihood - normalized accuracy) * `perplexity` * `word_perplexity` (perplexity per word) * `byte_perplexity` (perplexity per byte) * `bits_per_byte` * `matthews_corrcoef` (Matthews correlation coefficient) * `f1` (F1 score) * `bleu` * `chrf` * `ter` Aggregation functions: * `mean` * `median` * `perplexity` * `weighted_perplexity` * `bits_per_byte` ### Adding a Multiple Choice Metric Adding a multiple choice metric has a few steps. To get it working you need to: 1. register a metric function 2. register an aggregation function 3. update the `Task` definition to make sure the correct arguments are passed The default metric and aggregation functions are in `lm_eval/api/metrics.py`, and you can add a function there if it's for general use. The metrics are towards the bottom of the file and look like this: @register_metric( metric="mcc", higher_is_better=True, output_type="multiple_choice", aggregation="matthews_corrcoef", ) def mcc_fn(items): # This is a passthrough function return items Note that many of these are passthrough functions, and for multiple choice (at least) this function is never actually called. Aggregation functions are defined towards the top of the file, here's an example: @register_aggregation("matthews_corrcoef") def matthews_corrcoef(items): unzipped_list = list(zip(*items)) golds = unzipped_list[0] preds = unzipped_list[1] return sklearn.metrics.matthews_corrcoef(golds, preds) This function returns a single numeric value. The input is defined in `Task.process_results` in `lm_eval/api/task.py`. There's a section that looks like this: result_dict = { **({"acc": acc} if "acc" in use_metric else {}), **({"f1": (gold, pred)} if "f1" in use_metric else {}), **({"mcc": (gold, pred)} if "mcc" in use_metric else {}), **({"acc_norm": acc_norm} if "acc_norm" in use_metric else {}), **({"exact_match": exact_match} if "exact_match" in use_metric else {}), } The value here determines the input to the aggregation function, though the name used matches the metric function. These metrics all have simple needs and just need the accuracy or gold and predicted values, but immediately below this there are examples of metrics with more complicated needs you can use as reference. ## Good Reference Tasks Contributing a new task can be daunting! Luckily, much of the work has often been done for you in a different, similarly evaluated task. Good examples of task implementations to study include: Multiple choice tasks: - SciQ (`lm_eval/tasks/sciq/sciq.yaml`) Corpus perplexity evaluations: - Wikitext (`lm_eval/tasks/wikitext/wikitext.yaml`) Generative tasks: - GSM8k (`lm_eval/tasks/gsm8k/gsm8k.yaml`) Tasks using complex filtering: - GSM8k with CoT (+ with Self-Consistency): (`lm_eval/tasks/gsm8k/gsm8k-cot.yaml` ; `lm_eval/tasks/gsm8k/gsm8k-cot-self-consistency.yaml`) # Group Configuration When evaluating a language model, it's is not unusual to test across a number of tasks that may not be related to one another in order to assess a variety of capabilities. To this end, it may be combursome to have to list the set of tasks or add a new group name to each yaml of each individual task. 
To solve this, we can create a **group** yaml config. This is a config that contains the names of the tasks that should be included in a particular group. The config consists of two main keys: a `group` key which denotes the name of the group (as it would be called from the command line, e.g. `mmlu`) and a `task` key which is where we can list the tasks. The tasks listed in `task` are the task names that have been registered. A good example of a group yaml config can be found at [../lm_eval/tasks/mmlu/default/_mmlu.yaml]. See also the [New Task Guide](./new_task_guide.md) for a more in-depth and tutorial-esque explanation of how to write complex GroupConfigs. ## Configurations Groups are configured via the `GroupConfig` object. Below, we describe all fields usable within the object, and their role in defining a task. ### Parameters - **group** (`str`, defaults to `None`) — name of the group. Used to invoke it from the command line. - **group_alias** (`str`, defaults to `None`) - Alternative name for the group that will be printed in the table output. - **task** (`Union[str, list]`, defaults to `None`) - List of tasks that constitute the group. - **aggregate_metric_list** (`list`, defaults to `None`) - similar to `metric_list` in TaskConfigs, provide a list of configurations for metrics that should be aggregated across subtasks. Leaving empty will result in no aggregation being performed for this group. Keys for each list entry are: - `metric: str` - the name of the metric to aggregate over (all subtasks must report a metric holding this name.) - `aggregation: str` - what aggregation function to apply to aggregate these per-subtask metrics. **currently, only `mean` is supported.** - `weight_by_size: bool = True` whether to perform micro- averaging (`True`) or macro- (`False`) averaging of subtasks' accuracy scores when reporting the group's metric. MMLU, for example, averages over per-document accuracies (the *micro average*), resulting in the same accuracy as if one simply concatenated all 57 subjects into a single dataset and evaluated accuracy on that dataset. - `filter_list: Union[str, List[str]] = "none"` - what filter keys one should match on to aggregate results. For example, if trying to aggregate over the `exact_match` metric using `strict-match` filter for `bbh_cot_zeroshot`, then set this to be `filter_list: "strict-match"`. - **metadata** (`dict`, *optional*) - As with TaskConfigs, a field where extra config metadata can be passed. set the `num_fewshot` key within this to override the printed n_shot value in a results table for your group, for example.
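Pulling these fields together, a small hypothetical group config (the task names are placeholders) might look like:

```yaml
group: my_reasoning_suite        # invoked from the CLI via --tasks my_reasoning_suite
group_alias: Reasoning Suite     # name printed in the results table
task:
  - my_task_a
  - my_task_b
aggregate_metric_list:
  - metric: acc
    aggregation: mean
    weight_by_size: false        # macro-average the subtasks' accuracies
metadata:
  version: 1.0
```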
# Code Repo

[**Inference Scaling Laws: An Empirical Analysis of Compute-Optimal Inference for Problem-Solving with Language Models**](https://arxiv.org/abs/2408.00724).

## Clone

```bash
git clone --recurse-submodules [email protected]:thu-wyz/rebase.git
```

This command clones our repository with the [sglang](https://github.com/sgl-project/sglang) repository as a submodule. The sglang submodule should be on the *reward-model* branch, which we have modified slightly to support our process reward model for efficient tree search. You can also use hf_score.py in the repo to score the steps of each solution. The benchmark datasets are [MATH](https://github.com/hendrycks/math) and [GSM8K](https://github.com/openai/grade-school-math).

## Install

To install SGLang and the other dependencies:

```bash
cd sglang
pip install -e "python[all]"
```

You can also install SGLang from its official repo, but that version may not support our process reward model and hence can only be used for sampling.

## Finetune

Our fine-tuning code for policy models and reward models is based on [gpt-accelera](https://github.com/Edward-Sun/gpt-accelera). You can find the code in the finetune directory; we also provide Hugging Face fine-tuning code for the policy model. The models are available on Hugging Face: [Llemma-7b](https://huggingface.co/tkitsers/Llemma-metamath-7b), [Llemma-34b](https://huggingface.co/tkitsers/Llemma-metamath-34b), [Llemma reward model](https://huggingface.co/tkitsers/Llemma-reward-model).

## Launch Server

You can use **tmux** to start the servers, or run them in the background by adding **&** at the end of the scripts. Make sure to set the correct paths on your device.

```bash
bash ./scripts/run_policy.sh
bash ./scripts/run_reward.sh
```

## Sampling Baseline

```bash
bash ./scripts/sgl_baseline.sh
bash ./scripts/hf_scores.sh
```

## REBASE

Before starting REBASE, set the hyperparameters in the YAML file. Then run:

```bash
bash ./scripts/rebase.sh
```

## Evaluate

You can select various aggregation functions for the scores at each step, such as last, mean, prod, or min. Additionally, you can modify the script to select the answer based on best-of-n or weighted majority voting.

```bash
bash ./scripts/evaluate.sh
```

## Citation

If you find our work helpful, please consider citing us:

```bibtex
@misc{wu2024inferencescalinglawsempirical,
      title={Inference Scaling Laws: An Empirical Analysis of Compute-Optimal Inference for Problem-Solving with Language Models},
      author={Yangzhen Wu and Zhiqing Sun and Shanda Li and Sean Welleck and Yiming Yang},
      year={2024},
      eprint={2408.00724},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2408.00724},
}
```
<div align="center"> <img src="assets/logo.png" alt="logo" width="400"></img> </div> -------------------------------------------------------------------------------- | [**Blog**](https://lmsys.org/blog/2024-01-17-sglang/) | [**Paper**](https://arxiv.org/abs/2312.07104) | SGLang is a structured generation language designed for large language models (LLMs). It makes your interaction with LLMs faster and more controllable by co-designing the frontend language and the runtime system. The core features of SGLang include: - **A Flexible Front-End Language**: This allows for easy programming of LLM applications with multiple chained generation calls, advanced prompting techniques, control flow, multiple modalities, parallelism, and external interaction. - **A High-Performance Runtime with RadixAttention**: This feature significantly accelerates the execution of complex LLM programs by automatic KV cache reuse across multiple calls. It also supports other common techniques like continuous batching and tensor parallelism. ## News - [2024/02] 🔥 SGLang enables **3x faster JSON decoding** with compressed finite state machine ([blog](https://lmsys.org/blog/2024-02-05-compressed-fsm/)). - [2024/01] 🔥 SGLang powers the serving of the official **LLaVA v1.6** release demo ([usage](https://github.com/haotian-liu/LLaVA?tab=readme-ov-file#demo)). - [2024/01] SGLang provides up to **5x faster inference** with RadixAttention ([blog](https://lmsys.org/blog/2024-01-17-sglang/)). ## Contents - [Install](#install) - [Quick Start](#quick-start) - [Frontend: Structured Generation Language (SGLang)](#frontend-structured-generation-language-sglang) - [Backend: SGLang Runtime (SRT)](#backend-sglang-runtime-srt) - [Benchmark And Performance](#benchmark-and-performance) - [Roadmap](#roadmap) - [Citation And Acknowledgment](#citation-and-acknowledgment) ## Install ### Method 1: With pip ``` pip install "sglang[all]" ``` ### Method 2: From source ``` git clone [email protected]:sgl-project/sglang.git cd sglang pip install --upgrade pip pip install -e "python[all]" ``` ### Notes - If you are using older GPUs (NVIDIA V100, T4), please pick the correct triton compiler version to avoid some known bugs. - For NVIDIA T4, please use `pip install "triton>=2.2.0"`. - For NVIDIA V100, please install the [nightly](https://triton-lang.org/main/getting-started/installation.html) version. - If you only need to use the OpenAI backend, you can avoid installing other dependencies by using `pip install "sglang[openai]"` ## Quick Start The example below shows how to use sglang to answer a mulit-turn question. ### Using Local Models First, launch a server with ``` python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 ``` Then, connect to the server and answer a multi-turn question. 
```python from sglang import function, system, user, assistant, gen, set_default_backend, RuntimeEndpoint @function def multi_turn_question(s, question_1, question_2): s += system("You are a helpful assistant.") s += user(question_1) s += assistant(gen("answer_1", max_tokens=256)) s += user(question_2) s += assistant(gen("answer_2", max_tokens=256)) set_default_backend(RuntimeEndpoint("http://localhost:30000")) state = multi_turn_question.run( question_1="What is the capital of the United States?", question_2="List two local attractions.", ) for m in state.messages(): print(m["role"], ":", m["content"]) print(state["answer_1"]) ``` ### Using OpenAI Models Set the OpenAI API Key ``` export OPENAI_API_KEY=sk-****** ``` Then, answer a multi-turn question. ```python from sglang import function, system, user, assistant, gen, set_default_backend, OpenAI @function def multi_turn_question(s, question_1, question_2): s += system("You are a helpful assistant.") s += user(question_1) s += assistant(gen("answer_1", max_tokens=256)) s += user(question_2) s += assistant(gen("answer_2", max_tokens=256)) set_default_backend(OpenAI("gpt-3.5-turbo")) state = multi_turn_question.run( question_1="What is the capital of the United States?", question_2="List two local attractions.", ) for m in state.messages(): print(m["role"], ":", m["content"]) print(state["answer_1"]) ``` ### More Examples Anthropic and VertexAI (Gemini) models are also supported. You can find more examples at [examples/quick_start](examples/quick_start). ## Frontend: Structured Generation Language (SGLang) To begin with, import sglang. ```python import sglang as sgl ``` `sglang` provides some simple primitives such as `gen`, `select`, `fork`, `image`. You can implement your prompt flow in a function decorated by `sgl.function`. You can then invoke the function with `run` or `run_batch`. The system will manage the state, chat template, parallelism and batching for you. The complete code for the examples below can be found at [readme_examples.py](examples/usage/readme_examples.py) ### Control Flow You can use any Python code within the function body, including control flow, nested function calls, and external libraries. ```python @sgl.function def tool_use(s, question): s += "To answer this question: " + question + ". " s += "I need to use a " + sgl.gen("tool", choices=["calculator", "search engine"]) + ". " if s["tool"] == "calculator": s += "The math expression is" + sgl.gen("expression") elif s["tool"] == "search engine": s += "The key word to search is" + sgl.gen("word") ``` ### Parallelism Use `fork` to launch parallel prompts. Because `sgl.gen` is non-blocking, the for loop below issues two generation calls in parallel. ```python @sgl.function def tip_suggestion(s): s += ( "Here are two tips for staying healthy: " "1. Balanced Diet. 2. Regular Exercise.\n\n" ) forks = s.fork(2) for i, f in enumerate(forks): f += f"Now, expand tip {i+1} into a paragraph:\n" f += sgl.gen(f"detailed_tip", max_tokens=256, stop="\n\n") s += "Tip 1:" + forks[0]["detailed_tip"] + "\n" s += "Tip 2:" + forks[1]["detailed_tip"] + "\n" s += "In summary" + sgl.gen("summary") ``` ### Multi Modality Use `sgl.image` to pass an image as input. ```python @sgl.function def image_qa(s, image_file, question): s += sgl.user(sgl.image(image_file) + question) s += sgl.assistant(sgl.gen("answer", max_tokens=256) ``` See also [srt_example_llava.py](examples/quick_start/srt_example_llava.py). 
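For completeness, a hypothetical invocation of the `image_qa` function above could look like the following, assuming a vision-capable model (such as LLaVA) is being served by a local SRT endpoint and `./example_image.png` is a placeholder path:

```python
import sglang as sgl

# Point the frontend at the locally running server launched earlier.
sgl.set_default_backend(sgl.RuntimeEndpoint("http://localhost:30000"))

state = image_qa.run(
    image_file="./example_image.png",   # placeholder path to a local image
    question="What is shown in this image?",
)
print(state["answer"])
```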
### Constrained Decoding Use `regex` to specify a regular expression as a decoding constraint. This is only supported for local models. ```python @sgl.function def regular_expression_gen(s): s += "Q: What is the IP address of the Google DNS servers?\n" s += "A: " + sgl.gen( "answer", temperature=0, regex=r"((25[0-5]|2[0-4]\d|[01]?\d\d?).){3}(25[0-5]|2[0-4]\d|[01]?\d\d?)", ) ``` ### JSON Decoding Use `regex` to specify a JSON schema with a regular expression. ```python character_regex = ( r"""\{\n""" + r""" "name": "[\w\d\s]{1,16}",\n""" + r""" "house": "(Gryffindor|Slytherin|Ravenclaw|Hufflepuff)",\n""" + r""" "blood status": "(Pure-blood|Half-blood|Muggle-born)",\n""" + r""" "occupation": "(student|teacher|auror|ministry of magic|death eater|order of the phoenix)",\n""" + r""" "wand": \{\n""" + r""" "wood": "[\w\d\s]{1,16}",\n""" + r""" "core": "[\w\d\s]{1,16}",\n""" + r""" "length": [0-9]{1,2}\.[0-9]{0,2}\n""" + r""" \},\n""" + r""" "alive": "(Alive|Deceased)",\n""" + r""" "patronus": "[\w\d\s]{1,16}",\n""" + r""" "bogart": "[\w\d\s]{1,16}"\n""" + r"""\}""" ) @sgl.function def character_gen(s, name): s += name + " is a character in Harry Potter. Please fill in the following information about this character.\n" s += sgl.gen("json_output", max_tokens=256, regex=character_regex) ``` See also [json_decode.py](examples/usage/json_decode.py) for an additional example on specifying formats with Pydantic models. ### Batching Use `run_batch` to run a batch of requests with continuous batching. ```python @sgl.function def text_qa(s, question): s += "Q: " + question + "\n" s += "A:" + sgl.gen("answer", stop="\n") states = text_qa.run_batch( [ {"question": "What is the capital of the United Kingdom?"}, {"question": "What is the capital of France?"}, {"question": "What is the capital of Japan?"}, ], progress_bar=True ) ``` ### Streaming Add `stream=True` to enable streaming. ```python @sgl.function def text_qa(s, question): s += "Q: " + question + "\n" s += "A:" + sgl.gen("answer", stop="\n") state = text_qa.run( question="What is the capital of France?", temperature=0.1, stream=True ) for out in state.text_iter(): print(out, end="", flush=True) ``` ### Tips and Implementation Details - The `choices` argument in `sgl.gen` is implemented by computing the normalized log probabilities of all choices and selecting the one with the highest probability. - The `regex` argument in `sgl.gen` is implemented through autoregressive decoding with logit bias masking, according to the constraints set by the regex. ## Backend: SGLang Runtime (SRT) The SGLang Runtime (SRT) is designed to work best with the SGLang frontend. However, it can also be used as a standalone API server. In this case, the [RadixAttention](https://arxiv.org/abs/2312.07104) can still greatly accelerate many use cases with automatic KV cache reuse. ### Usage Launch a server ``` python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 ``` Send a request ``` curl http://localhost:30000/generate \ -H "Content-Type: application/json" \ -d '{ "text": "Once upon a time,", "sampling_params": { "max_new_tokens": 16, "temperature": 0 } }' ``` Learn more about the argument format [here](docs/sampling_params.md). ### OpenAI Compatible API In addition, the server supports an experimental OpenAI-compatible API. 
```python
import openai
client = openai.Client(
    base_url="http://127.0.0.1:30000/v1", api_key="EMPTY")

# Text completion
response = client.completions.create(
    model="default",
    prompt="The capital of France is",
    temperature=0,
    max_tokens=32,
)
print(response)

# Chat completion
response = client.chat.completions.create(
    model="default",
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant"},
        {"role": "user", "content": "List 3 countries and their capitals."},
    ],
    temperature=0,
    max_tokens=64,
)
print(response)
```

In the above example, the server uses the chat template specified in the model tokenizer. You can override the chat template if needed when launching the server:

```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 --chat-template llama-2
```

If the chat template you are looking for is missing, you are welcome to contribute it. Meanwhile, you can also temporarily register your chat template as follows:

```json
{
  "name": "my_model",
  "system": "<|im_start|>system",
  "user": "<|im_start|>user",
  "assistant": "<|im_start|>assistant",
  "sep_style": "CHATML",
  "sep": "<|im_end|>",
  "stop_str": ["<|im_end|>", "<|im_start|>"]
}
```

```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 --chat-template ./my_model_template.json
```

### Additional Arguments
- Add `--tp 2` to enable tensor parallelism.
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 --tp 2
```
- If you see out-of-memory errors during serving, please try to reduce the memory usage of the KV cache pool by setting a smaller value of `--mem-fraction-static`. The default value is `0.9`.
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 --mem-fraction-static 0.7
```
- You can turn on [flashinfer](docs/flashinfer.md) to accelerate inference by using highly optimized CUDA kernels.

### Supported Models
- Llama
- Mistral
- Mixtral
- Qwen / Qwen 2
- Gemma
  - Please add a new flag `--attention-reduce-in-fp32` to avoid some precision errors.
  - `python -m sglang.launch_server --model-path google/gemma-7b-it --port 30000 --attention-reduce-in-fp32`
- LLaVA
  - `python3 -m sglang.launch_server --model-path liuhaotian/llava-v1.5-7b --tokenizer-path llava-hf/llava-1.5-7b-hf --chat-template vicuna_v1.1 --port 30000`
  - `python3 -m sglang.launch_server --model-path liuhaotian/llava-v1.6-vicuna-7b --tokenizer-path llava-hf/llava-1.5-7b-hf --chat-template vicuna_v1.1 --port 30000`
  - `python3 -m sglang.launch_server --model-path liuhaotian/llava-v1.6-34b --tokenizer-path liuhaotian/llava-v1.6-34b-tokenizer --port 30000`
- Yi-VL
  - see [srt_example_yi_vl.py](examples/quick_start/srt_example_yi_vl.py).
- AWQ/GPTQ quantization

## Benchmark And Performance
- Llama-7B on NVIDIA A10G, FP16, Tensor Parallelism=1
![llama_7b](assets/llama_7b.jpg)

- Mixtral-8x7B on NVIDIA A10G, FP16, Tensor Parallelism=8
![mixtral_8x7b](assets/mixtral_8x7b.jpg)

Learn more [here](docs/benchmark_results.md).

## Roadmap
https://github.com/sgl-project/sglang/issues/157

## Citation And Acknowledgment
```
@misc{zheng2023efficiently,
      title={Efficiently Programming Large Language Models using SGLang},
      author={Lianmin Zheng and Liangsheng Yin and Zhiqiang Xie and Jeff Huang and Chuyue Sun and Cody Hao Yu and Shiyi Cao and Christos Kozyrakis and Ion Stoica and Joseph E.
Gonzalez and Clark Barrett and Ying Sheng},
      year={2023},
      eprint={2312.07104},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}
```

[![Paper page](https://huggingface.co/datasets/huggingface/badges/resolve/main/paper-page-md.svg)](https://huggingface.co/papers/2312.07104)

We learned from the design and reused some code from the following projects: [Guidance](https://github.com/guidance-ai/guidance), [vLLM](https://github.com/vllm-project/vllm), [LightLLM](https://github.com/ModelTC/lightllm), [FlashInfer](https://github.com/flashinfer-ai/flashinfer), [Outlines](https://github.com/outlines-dev/outlines), [LMQL](https://github.com/eth-sri/lmql).
{ "source": "simplescaling/s1", "title": "eval/rebase/sglang/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 14251 }
# Tasks A list of supported tasks and task groupings can be viewed with `lm-eval --tasks list`. For more information, including a full list of task names and their precise meanings or sources, follow the links provided to the individual README.md files for each subfolder. | Task Family | Description | Language(s) | |-------------|-------------|-------------| | [aclue](aclue/README.md) | Tasks focusing on ancient Chinese language understanding and cultural aspects. | Ancient Chinese | | [aexams](aexams/README.md) | Tasks in Arabic related to various academic exams covering a range of subjects. | Arabic | | [agieval](agieval/README.md) | Tasks involving historical data or questions related to history and historical texts. | English, Chinese | | [anli](anli/README.md) | Adversarial natural language inference tasks designed to test model robustness. | English | | [arabic_leaderboard_complete](arabic_leaderboard_complete/README.md) | A full version of the tasks in the Open Arabic LLM Leaderboard, focusing on the evaluation of models that reflect the characteristics of Arabic language understanding and comprehension, culture, and heritage. Note that some of these tasks are machine-translated. | Arabic (Some MT) | | [arabic_leaderboard_light](arabic_leaderboard_light/README.md) | A light version of the tasks in the Open Arabic LLM Leaderboard (i.e., 10% samples of the test set in the original benchmarks), focusing on the evaluation of models that reflect the characteristics of Arabic language understanding and comprehension, culture, and heritage. Note that some of these tasks are machine-translated. | Arabic (Some MT) | | [arabicmmlu](arabicmmlu/README.md) | Localized Arabic version of MMLU with multiple-choice questions from 40 subjects. | Arabic | | [arc](arc/README.md) | Tasks involving complex reasoning over a diverse set of questions. | English | | [arithmetic](arithmetic/README.md) | Tasks involving numerical computations and arithmetic reasoning. | English | | [asdiv](asdiv/README.md) | Tasks involving arithmetic and mathematical reasoning challenges. | English | | [babi](babi/README.md) | Tasks designed as question and answering challenges based on simulated stories. | English | | [basque_bench](basque_bench/README.md) | Collection of tasks in Basque encompassing various evaluation areas. | Basque | | [basqueglue](basqueglue/README.md) | Tasks designed to evaluate language understanding in Basque language. | Basque | | [bbh](bbh/README.md) | Tasks focused on deep semantic understanding through hypothesization and reasoning. | English, German | | [belebele](belebele/README.md) | Language understanding tasks in a variety of languages and scripts. | Multiple (122 languages) | | benchmarks | General benchmarking tasks that test a wide range of language understanding capabilities. | | | [bertaqa](bertaqa/README.md) | Local Basque cultural trivia QA tests in English and Basque languages. | English, Basque, Basque (MT) | | [bigbench](bigbench/README.md) | Broad tasks from the BIG-bench benchmark designed to push the boundaries of large models. | Multiple | | [blimp](blimp/README.md) | Tasks testing grammatical phenomena to evaluate language model's linguistic capabilities. | English | | [catalan_bench](catalan_bench/README.md) | Collection of tasks in Catalan encompassing various evaluation areas. | Catalan | | [ceval](ceval/README.md) | Tasks that evaluate language understanding and reasoning in an educational context. 
| Chinese | | [cmmlu](cmmlu/README.md) | Multi-subject multiple choice question tasks for comprehensive academic assessment. | Chinese | | code_x_glue | Tasks that involve understanding and generating code across multiple programming languages. | Go, Java, JS, PHP, Python, Ruby | | [commonsense_qa](commonsense_qa/README.md) | CommonsenseQA, a multiple-choice QA dataset for measuring commonsense knowledge. | English | | [copal_id](copal_id/README.md) | Indonesian causal commonsense reasoning dataset that captures local nuances. | Indonesian | | [coqa](coqa/README.md) | Conversational question answering tasks to test dialog understanding. | English | | [crows_pairs](crows_pairs/README.md) | Tasks designed to test model biases in various sociodemographic groups. | English, French | | csatqa | Tasks related to SAT and other standardized testing questions for academic assessment. | Korean | | [drop](drop/README.md) | Tasks requiring numerical reasoning, reading comprehension, and question answering. | English | | [eq_bench](eq_bench/README.md) | Tasks focused on equality and ethics in question answering and decision-making. | English | | [eus_exams](eus_exams/README.md) | Tasks based on various professional and academic exams in the Basque language. | Basque | | [eus_proficiency](eus_proficiency/README.md) | Tasks designed to test proficiency in the Basque language across various topics. | Basque | | [eus_reading](eus_reading/README.md) | Reading comprehension tasks specifically designed for the Basque language. | Basque | | [eus_trivia](eus_trivia/README.md) | Trivia and knowledge testing tasks in the Basque language. | Basque | | [fda](fda/README.md) | Tasks for extracting key-value pairs from FDA documents to test information extraction. | English | | [fld](fld/README.md) | Tasks involving free-form and directed dialogue understanding. | English | | [french_bench](french_bench/README.md) | Set of tasks designed to assess language model performance in French. | French| | [galician_bench](galician_bench/README.md) | Collection of tasks in Galician encompassing various evaluation areas. | Galician | | [glue](glue/README.md) | General Language Understanding Evaluation benchmark to test broad language abilities. | English | | [gpqa](gpqa/README.md) | Tasks designed for general public question answering and knowledge verification. | English | | [gsm8k](gsm8k/README.md) | A benchmark of grade school math problems aimed at evaluating reasoning capabilities. | English | | [haerae](haerae/README.md) | Tasks focused on assessing detailed factual and historical knowledge. | Korean | | [headqa](headqa/README.md) | A high-level education-based question answering dataset to test specialized knowledge. | Spanish, English | | [hellaswag](hellaswag/README.md) | Tasks to predict the ending of stories or scenarios, testing comprehension and creativity. | English | | [hendrycks_ethics](hendrycks_ethics/README.md) | Tasks designed to evaluate the ethical reasoning capabilities of models. | English | | [hendrycks_math](hendrycks_math/README.md) | Mathematical problem-solving tasks to test numerical reasoning and problem-solving. | English | | [ifeval](ifeval/README.md) | Interactive fiction evaluation tasks for narrative understanding and reasoning. | English | | [inverse_scaling](inverse_scaling/README.md) | Multiple-choice tasks from the Inverse Scaling Prize, designed to find settings where larger language models perform worse. 
| English | | [kmmlu](kmmlu/README.md) | Knowledge-based multi-subject multiple choice questions for academic evaluation. | Korean | | [kobest](kobest/README.md) | A collection of tasks designed to evaluate understanding in Korean language. | Korean | | [kormedmcqa](kormedmcqa/README.md) | Medical question answering tasks in Korean to test specialized domain knowledge. | Korean | | [lambada](lambada/README.md) | Tasks designed to predict the endings of text passages, testing language prediction skills. | English | | [lambada_cloze](lambada_cloze/README.md) | Cloze-style LAMBADA dataset. | English | | [lambada_multilingual](lambada_multilingual/README.md) | Multilingual LAMBADA dataset. This is a legacy version of the multilingual dataset, and users should instead use `lambada_multilingual_stablelm`. | German, English, Spanish, French, Italian | | [lambada_multilingual_stablelm](lambada_multilingual_stablelm/README.md) | Multilingual LAMBADA dataset. Users should prefer evaluating on this version of the multilingual dataset instead of on `lambada_multilingual`. | German, English, Spanish, French, Italian, Dutch, Portuguese | | [leaderboard](leaderboard/README.md) | Task group used by Hugging Face's [Open LLM Leaderboard v2](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard). Those tasks are static and will not change through time | English | | [lingoly](lingoly/README.md) | Challenging logical reasoning benchmark in low-resource languages with controls for memorization | English, Multilingual | | [logiqa](logiqa/README.md) | Logical reasoning tasks requiring advanced inference and deduction. | English, Chinese | | [logiqa2](logiqa2/README.md) | Large-scale logical reasoning dataset adapted from the Chinese Civil Service Examination. | English, Chinese | | [mathqa](mathqa/README.md) | Question answering tasks involving mathematical reasoning and problem-solving. | English | | [mc_taco](mc_taco/README.md) | Question-answer pairs that require temporal commonsense comprehension. | English | | [med_concepts_qa](med_concepts_qa/README.md) | Benchmark for evaluating LLMs on their abilities to interpret medical codes and distinguish between medical concept. | English | | medmcqa | Medical multiple choice questions assessing detailed medical knowledge. | English | | medqa | Multiple choice question answering based on the United States Medical License Exams. | | | [mgsm](mgsm/README.md) | Benchmark of multilingual grade-school math problems. | Spanish, French, German, Russian, Chinese, Japanese, Thai, Swahili, Bengali, Telugu | | [minerva_math](minerva_math/README.md) | Mathematics-focused tasks requiring numerical reasoning and problem-solving skills. | English | | mmlu | Massive Multitask Language Understanding benchmark for broad domain language evaluation. Several variants are supported. | English | | [mmlusr](mmlusr/README.md) | Variation of MMLU designed to be more rigorous. | English | | model_written_evals | Evaluation tasks auto-generated for evaluating a collection of AI Safety concerns. | | | [mutual](mutual/README.md) | A retrieval-based dataset for multi-turn dialogue reasoning. | English | | [nq_open](nq_open/README.md) | Open domain question answering tasks based on the Natural Questions dataset. | English | | [okapi/arc_multilingual](okapi/arc_multilingual/README.md) | Tasks that involve reading comprehension and information retrieval challenges. 
| Multiple (31 languages) **Machine Translated.** | | [okapi/hellaswag_multilingual](okapi/hellaswag_multilingual/README.md) | Tasks that involve reading comprehension and information retrieval challenges. | Multiple (30 languages) **Machine Translated.** | | okapi/mmlu_multilingual | Tasks that involve reading comprehension and information retrieval challenges. | Multiple (34 languages) **Machine Translated.** | | [okapi/truthfulqa_multilingual](okapi/truthfulqa_multilingual/README.md) | Tasks that involve reading comprehension and information retrieval challenges. | Multiple (31 languages) **Machine Translated.** | | [openbookqa](openbookqa/README.md) | Open-book question answering tasks that require external knowledge and reasoning. | English | | [paloma](paloma/README.md) | Paloma is a comprehensive benchmark designed to evaluate open language models across a wide range of domains, ranging from niche artist communities to mental health forums on Reddit. | English | | [paws-x](paws-x/README.md) | Paraphrase Adversaries from Word Scrambling, focusing on cross-lingual capabilities. | English, French, Spanish, German, Chinese, Japanese, Korean | | [pile](pile/README.md) | Open source language modelling data set that consists of 22 smaller, high-quality datasets. | English | | [pile_10k](pile_10k/README.md) | The first 10K elements of The Pile, useful for debugging models trained on it. | English | | [piqa](piqa/README.md) | Physical Interaction Question Answering tasks to test physical commonsense reasoning. | English | | [polemo2](polemo2/README.md) | Sentiment analysis and emotion detection tasks based on Polish language data. | Polish | | [portuguese_bench](portuguese_bench/README.md) | Collection of tasks in European Portuguese encompassing various evaluation areas. | Portuguese | | [prost](prost/README.md) | Tasks requiring understanding of professional standards and ethics in various domains. | English | | [pubmedqa](pubmedqa/README.md) | Question answering tasks based on PubMed research articles for biomedical understanding. | English | | [qa4mre](qa4mre/README.md) | Question Answering for Machine Reading Evaluation, assessing comprehension and reasoning. | English | | [qasper](qasper/README.md) | Question Answering dataset based on academic papers, testing in-depth scientific knowledge. | English | | [race](race/README.md) | Reading comprehension assessment tasks based on English exams in China. | English | | realtoxicityprompts | Tasks to evaluate language models for generating text with potential toxicity. | | | [sciq](sciq/README.md) | Science Question Answering tasks to assess understanding of scientific concepts. | English | | [scrolls](scrolls/README.md) | Tasks that involve long-form reading comprehension across various domains. | English | | [siqa](siqa/README.md) | Social Interaction Question Answering to evaluate common sense and social reasoning. | English | | [spanish_bench](spanish_bench/README.md) | Collection of tasks in Spanish encompassing various evaluation areas. | Spanish | | [squad_completion](squad_completion/README.md) | A variant of the SQuAD question answering task designed for zero-shot evaluation of small LMs. | English | | [squadv2](squadv2/README.md) | Stanford Question Answering Dataset version 2, a reading comprehension benchmark. | English | | [storycloze](storycloze/README.md) | Tasks to predict story endings, focusing on narrative logic and coherence. 
| English | | [super_glue](super_glue/README.md) | A suite of challenging tasks designed to test a range of language understanding skills. | English | | [swag](swag/README.md) | Situations With Adversarial Generations, predicting the next event in videos. | English | | [swde](swde/README.md) | Information extraction tasks from semi-structured web pages. | English | | [tinyBenchmarks](tinyBenchmarks/README.md) | Evaluation of large language models with fewer examples using tiny versions of popular benchmarks. | English | | [tmmluplus](tmmluplus/README.md) | An extended set of tasks under the TMMLU framework for broader academic assessments. | Traditional Chinese | | [toxigen](toxigen/README.md) | Tasks designed to evaluate language models on their propensity to generate toxic content. | English | | [translation](translation/README.md) | Tasks focused on evaluating the language translation capabilities of models. | Arabic, English, Spanish, Basque, Hindi, Indonesian, Burmese, Russian, Swahili, Telugu, Chinese | | [triviaqa](triviaqa/README.md) | A large-scale dataset for trivia question answering to test general knowledge. | English | | [truthfulqa](truthfulqa/README.md) | A QA task aimed at evaluating the truthfulness and factual accuracy of model responses. | English | | [turkishmmlu](turkishmmlu/README.md) | A multiple-choice QA test modeled after MMLU, written in Turkish based on Turkish high-school level exams. | Turkish | | [unitxt](unitxt/README.md) | A number of tasks implemented using the unitxt library for flexible, shareable, and reusable data preparation and evaluation for generative AI. | English | | [unscramble](unscramble/README.md) | Tasks involving the rearrangement of scrambled sentences to test syntactic understanding. | English | | [webqs](webqs/README.md) | Web-based question answering tasks designed to evaluate internet search and retrieval. | English | | [wikitext](wikitext/README.md) | Tasks based on text from Wikipedia articles to assess language modeling and generation. | English | | [winogrande](winogrande/README.md) | A large-scale dataset for coreference resolution, inspired by the Winograd Schema Challenge. | English | | [wmdp](wmdp/README.md) | A benchmark with the objective of minimizing performance, based on potentially-sensitive multiple-choice knowledge questions. | English | | [wmt2016](wmt2016/README.md) | Tasks from the WMT 2016 shared task, focusing on translation between multiple languages. | English, Czech, German, Finnish, Russian, Romanian, Turkish | | [wsc273](wsc273/README.md) | The Winograd Schema Challenge, a test of commonsense reasoning and coreference resolution. | English | | [xcopa](xcopa/README.md) | Cross-lingual Choice of Plausible Alternatives, testing reasoning in multiple languages. | Estonian, Haitian, Indonesian, Italian, Quechua, Swahili, Tamil, Thai, Turkish, Vietnamese, Chinese | | [xnli](xnli/README.md) | Cross-Lingual Natural Language Inference to test understanding across different languages. | Arabic, Bulgarian, German, Greek, English, Spanish, French, Hindi, Russian, Swahili, Thai, Turkish, Urdu, Vietnamese, Chinese | | [xnli_eu](xnli_eu/README.md) | Cross-lingual Natural Language Inference tasks in Basque. | Basque | | [xstorycloze](xstorycloze/README.md) | Cross-lingual narrative understanding tasks to predict story endings in multiple languages. 
| Russian, Simplified Chinese, Spanish, Arabic, Hindi, Indonesian, Telugu, Swahili, Basque, Burmese | | [xwinograd](xwinograd/README.md) | Cross-lingual Winograd schema tasks for coreference resolution in multiple languages. | English, French, Japanese, Portuguese, Russian, Chinese |
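As a quick orientation, the sketch below shows one way to run a couple of the tasks listed above through the harness's Python API. It is a minimal example under stated assumptions: the `lm_eval` package is installed, the current `simple_evaluate` API is available, and `EleutherAI/pythia-160m` is only a small placeholder for whatever Hugging Face model you actually want to evaluate.

```python
# Minimal sketch: evaluate two tasks from the table above via the Python API.
# Assumes `pip install lm-eval` and network access for the model and datasets.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                      # Hugging Face backend
    model_args="pretrained=EleutherAI/pythia-160m",  # placeholder model
    tasks=["arc_easy", "hellaswag"],                 # task names as listed above
    num_fewshot=0,
)
print(results["results"])
```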
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 17511 }
janitor.py contains a script to remove benchmark data contamination from training data sets.
It uses the approach described in the [GPT-3 paper](https://arxiv.org/abs/2005.14165).

## Algorithm
1) Collects all contamination text files that are to be removed from training data
2) Filters training data by finding `N`gram matches between the training data and any contamination
   1) `N`grams ignore case and punctuation and are split on whitespace.
   2) Matching `N`gram substrings are removed, as is a `window_to_remove` character window around the match, splitting the training data into chunks
3) Any chunks less than `minimum_slice_length` are removed
4) Training data sets split into more than `too_dirty_cutoff` chunks are considered completely contaminated and removed

OpenAI used:
```
ngram_n = 13
window_to_remove = 200
minimum_slice_length = 200
too_dirty_cutoff = 10
```

## Compiling

Janitor can be used as a pure python program, but it is much faster if the ngram code is run in C++. To compile the C++ code, run
```
pip install pybind11
c++ -O3 -Wall -shared -std=c++11 -fPIC $(python3 -m pybind11 --includes) janitor_util.cpp -o janitor_util$(python3-config --extension-suffix)
```
MacOS users: If your compiler isn't linked to Python, you may need to add `-undefined dynamic_lookup` to the command above. \
Linux users: If your compiler isn't linked to Python, you may need to follow these steps:
1. Rename the compiled code file to `janitor_util.so`.
2. Before running `import Janitor` in your code, add `sys.path.append("your/relative/path/to/janitor_util.so")` so that Python knows the location of `janitor_util.so`.
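To make the algorithm above concrete, here is a small, self-contained sketch of the n-gram matching idea (lowercasing, stripping punctuation, splitting on whitespace). It is illustrative only and is not the janitor.py implementation; names such as `has_contaminated_ngram` are hypothetical.

```python
import re

def ngrams(text, n=13):
    """Lowercase, strip punctuation, split on whitespace, and yield n-grams."""
    tokens = re.sub(r"[^\w\s]", "", text.lower()).split()
    for i in range(len(tokens) - n + 1):
        yield " ".join(tokens[i : i + n])

def has_contaminated_ngram(document, benchmark_texts, n=13):
    """Return True if any n-gram of `document` also appears in the benchmark texts."""
    contaminated = set()
    for bench in benchmark_texts:
        contaminated.update(ngrams(bench, n))
    return any(gram in contaminated for gram in ngrams(document, n))

# A 13-gram shared with a benchmark question flags the training document.
benchmark = ["What is the capital of France? The capital of France is Paris, of course."]
doc = "filler text what is the capital of france the capital of france is paris of course more filler"
print(has_contaminated_ngram(doc, benchmark))  # True
```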
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/scripts/clean_training_data/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/scripts/clean_training_data/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1642 }
# Task-name

### Paper

Title: `paper title goes here`

Abstract: `link to paper PDF or arXiv abstract goes here`

`Short description of paper / benchmark goes here:`

Homepage: `homepage to the benchmark's website goes here, if applicable`

### Citation

```
BibTeX-formatted citation goes here
```

### Groups, Tags, and Tasks

#### Groups

* `group_name`: `Short description`

#### Tags

* `tag_name`: `Short description`

#### Tasks

* `task_name`: `1-sentence description of what this particular task does`
* `task_name2`: ...

### Checklist

For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
  * [ ] Have you referenced the original paper that introduced the task?
  * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?

If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/templates/new_yaml_task/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/templates/new_yaml_task/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1213 }
# Finetune

## gpt-accelera

Using gpt-accelera, first download and convert the HF model to checkpoints:

```
bash ./scripts_finetune/prepare*.sh
```

Then finetune the reward model or the policy model:

```
bash ./scripts_finetune/finetune_rm.sh
bash ./scripts_finetune/finetune_sft.sh
```

Finally, convert back to an HF model:

```
bash ./scripts_finetune/convert.sh
```

## huggingface

Using the Hugging Face implementation, edit `deepspeed_config.json`, then run:

```
bash ./hf_finetune.sh
```
{ "source": "simplescaling/s1", "title": "eval/rebase/inference_scaling/finetune/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/inference_scaling/finetune/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 466 }
## Benchmark Results

We tested our system on the following common LLM workloads and reported the achieved throughput:

- **[MMLU](https://arxiv.org/abs/2009.03300)**: A 5-shot, multi-choice, multi-task benchmark.
- **[HellaSwag](https://arxiv.org/abs/1905.07830)**: A 20-shot, multi-choice sentence completion benchmark.
- **[ReAct Agent](https://arxiv.org/abs/2210.03629)**: An agent task using prompt traces collected from the original ReAct paper.
- **[Tree-of-Thought](https://arxiv.org/pdf/2305.10601.pdf)**: A custom tree search-based prompt for solving GSM-8K problems.
- **JSON Decode**: Extracting information from a Wikipedia page and outputting it in JSON format.
- **Chat (short)**: A synthetic chat benchmark where each conversation includes 4 turns with short LLM outputs.
- **Chat (long)**: A synthetic chat benchmark where each conversation includes 4 turns with long LLM outputs.
- **[DSPy RAG](https://github.com/stanfordnlp/dspy)**: A retrieval-augmented generation pipeline in the DSPy tutorial.
- **[LLaVA Bench](https://github.com/haotian-liu/LLaVA)**: Running LLaVA v1.5, a vision language model, on the LLaVA-in-the-wild benchmark.

We tested both Llama-7B on one NVIDIA A10G GPU (24GB) and Mixtral-8x7B on 8 NVIDIA A10G GPUs with tensor parallelism, using FP16 precision. We used vllm v0.2.5, guidance v0.1.8, Hugging Face TGI v1.3.0, and SGLang v0.1.5.

- Llama-7B on NVIDIA A10G, FP16, Tensor Parallelism=1
![llama_7b](../assets/llama_7b.jpg)

- Mixtral-8x7B on NVIDIA A10G, FP16, Tensor Parallelism=8
![mixtral_8x7b](../assets/mixtral_8x7b.jpg)

The benchmark code is available [here](https://github.com/sgl-project/sglang/tree/main/benchmark).
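For a rough do-it-yourself latency check (separate from the benchmark suite above), a sketch like the following can time simple requests against a running server. It assumes a local server on port 30000, as used elsewhere in these docs; the prompt list and the 64-token budget are arbitrary choices.

```python
# Minimal latency/throughput sanity check against a running SGLang server.
# Assumes `python -m sglang.launch_server ... --port 30000` is already running.
import time
import requests

prompts = ["Once upon a time,", "The capital of France is", "In a distant galaxy,"]
start = time.time()
generated = 0
for p in prompts:
    r = requests.post(
        "http://localhost:30000/generate",
        json={"text": p, "sampling_params": {"max_new_tokens": 64, "temperature": 0}},
    )
    r.raise_for_status()
    generated += 64  # upper bound; the real count is lower if EOS is reached early
elapsed = time.time() - start
print(f"{len(prompts)} requests in {elapsed:.2f}s (~{generated / elapsed:.1f} tokens/s upper bound)")
```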
{ "source": "simplescaling/s1", "title": "eval/rebase/sglang/docs/benchmark_results.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/docs/benchmark_results.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1671 }
## Flashinfer Mode [flashinfer](https://github.com/flashinfer-ai/flashinfer) is a kernel library for LLM serving. It can be used in SGLang runtime to accelerate attention computation. ### Install flashinfer See https://docs.flashinfer.ai/installation.html. ### Run a Server With Flashinfer Mode Add `--enable-flashinfer` argument to enable flashinfer when launching a server. Example: ```bash python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 --enable-flashinfer ```
{ "source": "simplescaling/s1", "title": "eval/rebase/sglang/docs/flashinfer.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/docs/flashinfer.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 510 }
## How to Support a New Model To support a new model in SGLang, you only need to add a single file under [SGLang Models Directory](https://github.com/sgl-project/sglang/tree/main/python/sglang/srt/models). You can learn from existing model implementations and create new files for the new models. Most models are based on the transformer architecture, making them very similar. Another valuable resource is the vLLM model implementations. vLLM has extensive coverage of models, and SGLang has reused vLLM for most parts of the model implementations. This similarity makes it easy to port many models from vLLM to SGLang. 1. Compare these two files [SGLang LLaMA Implementation](https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/models/llama2.py) and [vLLM LLaMA Implementation](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/llama.py). This comparison will help you understand how to convert a model implementation from vLLM to SGLang. The major difference is the replacement of PagedAttention with RadixAttention. The other parts are almost identical. 2. Convert models from vLLM to SGLang by visiting the [vLLM Models Directory](https://github.com/vllm-project/vllm/tree/main/vllm/model_executor/models).
{ "source": "simplescaling/s1", "title": "eval/rebase/sglang/docs/model_support.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/docs/model_support.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1253 }
## Sampling Parameters of SGLang Runtime This doc describes the sampling parameters of the SGLang Runtime. The `/generate` endpoint accepts the following arguments in the JSON format. ```python @dataclass class GenerateReqInput: # The input prompt text: Union[List[str], str] # The image input image_data: Optional[Union[List[str], str]] = None # The sampling_params sampling_params: Union[List[Dict], Dict] = None # The request id rid: Optional[Union[List[str], str]] = None # Whether return logprobs of the prompts return_logprob: Optional[Union[List[bool], bool]] = None # The start location of the prompt for return_logprob logprob_start_len: Optional[Union[List[int], int]] = None # Whether to stream output stream: bool = False ``` The `sampling_params` follows this format ```python class SamplingParams: def __init__( self, max_new_tokens: int = 16, stop: Optional[Union[str, List[str]]] = None, temperature: float = 1.0, top_p: float = 1.0, top_k: int = -1, frequency_penalty: float = 0.0, presence_penalty: float = 0.0, ignore_eos: bool = False, skip_special_tokens: bool = True, dtype: Optional[str] = None, regex: Optional[str] = None, ) -> None: ``` ## Examples ### Normal ``` python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 ``` ```python import requests response = requests.post( "http://localhost:30000/generate", json={ "text": "The capital of France is", "sampling_params": { "temperature": 0, "max_new_tokens": 32, }, }, ) print(response.json()) ``` ### Streaming ```python import requests, json response = requests.post( "http://localhost:30000/generate", json={ "text": "The capital of France is", "sampling_params": { "temperature": 0, "max_new_tokens": 256, }, "stream": True, }, stream=True, ) prev = 0 for chunk in response.iter_lines(decode_unicode=False): chunk = chunk.decode("utf-8") if chunk and chunk.startswith("data:"): if chunk == "data: [DONE]": break data = json.loads(chunk[5:].strip("\n")) output = data["text"].strip() print(output[prev:], end="", flush=True) prev = len(output) print("") ``` ### Multi modal See [test_httpserver_llava.py](../test/srt/test_httpserver_llava.py).
{ "source": "simplescaling/s1", "title": "eval/rebase/sglang/docs/sampling_params.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/docs/sampling_params.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2521 }
## SRT Unit Tests ### Low-level API ``` cd sglang/test/srt/model python3 test_llama_low_api.py python3 test_llama_extend.py python3 test_llava_low_api.py python3 bench_llama_low_api.py ``` ### High-level API ``` python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 ``` ``` cd test/lang python3 test_srt_backend.py ``` ### Performance #### MMLU ``` cd benchmark/mmlu ``` Follow README.md to download the data. ``` python3 bench_sglang.py --nsub 3 # Expected performance on A10G # Total latency: 8.200 # Average accuracy: 0.413 ``` #### GSM-8K ``` cd benchmark/gsm8k ``` Follow README.md to download the data. ``` python3 bench_sglang.py --num-q 200 # Expected performance on A10G # Latency: 32.103 # Accuracy: 0.250 ``` #### More Please also test `benchmark/hellaswag`, `benchmark/latency_throughput`. ### More Models #### LLaVA ``` python3 -m sglang.launch_server --model-path liuhaotian/llava-v1.5-7b --tokenizer-path llava-hf/llava-1.5-7b-hf --port 30000 ``` ``` cd benchmark/llava_bench python3 bench_sglang.py # Expected performance on A10G # Latency: 50.031 ``` ## SGLang Unit Tests ``` export ANTHROPIC_API_KEY= export OPENAI_API_KEY= python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000 ``` ``` cd test/lang python3 run_all.py ```
{ "source": "simplescaling/s1", "title": "eval/rebase/sglang/docs/test_process.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/docs/test_process.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1325 }
# ACLUE

### Paper

Can Large Language Model Comprehend Ancient Chinese? A Preliminary Test on ACLUE
https://arxiv.org/abs/2310.09550

The Ancient Chinese Language Understanding Evaluation (ACLUE) is an evaluation benchmark focused on ancient Chinese language comprehension. It aims to assess the performance of large-scale language models on understanding ancient Chinese. The benchmark comprises 15 tasks spanning various domains, including lexical, syntactic, semantic, inference, and knowledge. ACLUE's tasks are derived from a combination of manually curated questions from publicly available resources, and automatically generated questions from classical Chinese language corpora. The range of questions spans from the Xia dynasty (2070 BCE) to the Ming dynasty (1368 CE). ACLUE adopts a multiple-choice question format for all tasks.

Homepage: https://github.com/isen-zhang/ACLUE

### Citation

```bibtex
@inproceedings{zhang-li-2023-large,
    title = "Can Large Language Model Comprehend {A}ncient {C}hinese? A Preliminary Test on {ACLUE}",
    author = "Zhang, Yixuan and Li, Haonan",
    booktitle = "Proceedings of the Ancient Language Processing Workshop",
    month = sep,
    year = "2023",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd., Shoumen, Bulgaria",
    url = "https://aclanthology.org/2023.alp-1.9",
    pages = "80--87"
}
```

### Groups, Tags, and Tasks

#### Groups

- `aclue`: All 15 subjects of the ACLUE dataset, evaluated following the methodology in CMMLU's original implementation.

#### Tasks

The following tasks evaluate subjects in the ACLUE dataset using loglikelihood-based multiple-choice scoring:
- `aclue_{subject_english}`

### Checklist

* [x] Is the task an existing benchmark in the literature?
  * [x] Have you referenced the original paper that introduced the task?
  * [x] If yes, does the original paper provide a reference implementation?
    * [x] Yes, original implementation contributed by author of the benchmark

If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [x] Have you noted which, if any, published evaluation setups are matched by this variant?
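The tasks above are scored with loglikelihood-based multiple-choice selection. As a rough illustration of what that means (this is not the harness's actual code), the sketch below scores each candidate answer by the summed log-probability its tokens receive when appended to the question, then picks the highest-scoring option. It assumes the `transformers` library, with `gpt2` as a stand-in model.

```python
# Illustrative loglikelihood-based multiple-choice scoring (not lm-eval's implementation).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def option_loglikelihood(context, option):
    """Sum of log-probs assigned to the option tokens, conditioned on the context."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    opt_ids = tokenizer(option, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, opt_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Logits at position i predict token i+1, so shift by one for the option span.
    log_probs = torch.log_softmax(logits[0, ctx_ids.shape[1] - 1 : -1], dim=-1)
    return log_probs.gather(1, opt_ids[0].unsqueeze(1)).sum().item()

question = "Question: 2 + 2 = ?\nAnswer:"
options = [" 3", " 4", " 5"]
scores = [option_loglikelihood(question, o) for o in options]
print(options[scores.index(max(scores))])  # ideally " 4"
```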
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/aclue/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/aclue/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2287 }
# Arabic EXAMS

### Paper

EXAMS: a resource specialized in multilingual high school exam questions.
The original paper: [EXAMS](https://aclanthology.org/2020.emnlp-main.438/)

The Arabic EXAMS dataset includes five subjects:
- Islamic studies
- Biology
- Physics
- Science
- Social

The original dataset: [EXAMS-QA](https://github.com/mhardalov/exams-qa)

EXAMS is a benchmark dataset for cross-lingual and multilingual question answering for high school examinations. With 24,000 high-quality high school exam questions in 16 languages, covering 8 language families and 24 school subjects from Natural Sciences and Social Sciences, among others, EXAMS offers a unique fine-grained evaluation framework across multiple languages and subjects.

Homepage for Arabic EXAMS: [EXAMS Arabic Homepage](https://github.com/FreedomIntelligence/AceGPT/tree/main/eval/benchmark_eval/benchmarks/EXAMS_Arabic)

### Citation

### Groups, Tags, and Tasks

#### Groups

- `aexams`: Arabic EXAMS dataset, including the IslamicStudies, Biology, Science, Physics, and Social subjects.

#### Tasks

The following tasks evaluate subjects in the Arabic EXAMS dataset using loglikelihood-based multiple-choice scoring:
- `aexams_IslamicStudies`
- `aexams_Biology`
- `aexams_Science`
- `aexams_Physics`
- `aexams_Social`

### Checklist

* [x] Is the task an existing benchmark in the literature?
  * [x] Have you referenced the original paper that introduced the task?
  * [x] If yes, does the original paper provide a reference implementation?
    * [x] Yes, original implementation contributed by author of the benchmark

If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [x] Have you noted which, if any, published evaluation setups are matched by this variant?
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/aexams/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/aexams/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1895 }
# AfriMGSM

### Paper

IrokoBench: A New Benchmark for African Languages in the Age of Large Language Models
https://arxiv.org/pdf/2406.03368

IrokoBench is a human-translated benchmark dataset for 16 typologically diverse low-resource African languages covering three tasks: natural language inference (AfriXNLI), mathematical reasoning (AfriMGSM), and multi-choice knowledge-based QA (AfriMMLU).

### Citation

```
@misc{adelani2024irokobenchnewbenchmarkafrican,
      title={IrokoBench: A New Benchmark for African Languages in the Age of Large Language Models},
      author={David Ifeoluwa Adelani and Jessica Ojo and Israel Abebe Azime and Jian Yun Zhuang and Jesujoba O. Alabi and Xuanli He and Millicent Ochieng and Sara Hooker and Andiswa Bukula and En-Shiun Annie Lee and Chiamaka Chukwuneke and Happy Buzaaba and Blessing Sibanda and Godson Kalipe and Jonathan Mukiibi and Salomon Kabongo and Foutse Yuehgoh and Mmasibidi Setaka and Lolwethu Ndolela and Nkiruka Odu and Rooweither Mabuya and Shamsuddeen Hassan Muhammad and Salomey Osei and Sokhar Samb and Tadesse Kebede Guge and Pontus Stenetorp},
      year={2024},
      eprint={2406.03368},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2406.03368},
}
```

### Groups and Tasks

#### Groups

* `afrimgsm`: All afrimgsm tasks
* `afrimgsm_direct`: afrimgsm_direct evaluates models performance on the curated dataset
* `afrimgsm_en_cot`: afrimgsm_en_cot includes 5-shot of exemplars for chain-of-thought approach
* `afrimgsm_translate`: afrimgsm_translate evaluates models in translate-test setting

#### Tasks

* `afrimgsm_direct_{language_code}`: each task evaluates for one language
* `afrimgsm_en_cot_{language_code}`: each task evaluates for one language
* `afrimgsm_translate_{language_code}`: each task evaluates for one language

### Checklist

For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
  * [x] Have you referenced the original paper that introduced the task?
  * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?

If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [x] Have you noted which, if any, published evaluation setups are matched by this variant?
* [x] Checked for equivalence with v0.3.0 LM Evaluation Harness
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/afrimgsm/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/afrimgsm/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2586 }
# AfriMMLU

### Paper

IrokoBench: A New Benchmark for African Languages in the Age of Large Language Models
https://arxiv.org/pdf/2406.03368

IrokoBench is a human-translated benchmark dataset for 16 typologically diverse low-resource African languages covering three tasks: natural language inference (AfriXNLI), mathematical reasoning (AfriMGSM), and multi-choice knowledge-based QA (AfriMMLU).

### Citation

```
@misc{adelani2024irokobenchnewbenchmarkafrican,
      title={IrokoBench: A New Benchmark for African Languages in the Age of Large Language Models},
      author={David Ifeoluwa Adelani and Jessica Ojo and Israel Abebe Azime and Jian Yun Zhuang and Jesujoba O. Alabi and Xuanli He and Millicent Ochieng and Sara Hooker and Andiswa Bukula and En-Shiun Annie Lee and Chiamaka Chukwuneke and Happy Buzaaba and Blessing Sibanda and Godson Kalipe and Jonathan Mukiibi and Salomon Kabongo and Foutse Yuehgoh and Mmasibidi Setaka and Lolwethu Ndolela and Nkiruka Odu and Rooweither Mabuya and Shamsuddeen Hassan Muhammad and Salomey Osei and Sokhar Samb and Tadesse Kebede Guge and Pontus Stenetorp},
      year={2024},
      eprint={2406.03368},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2406.03368},
}
```

### Groups and Tasks

#### Groups

* `afrimmlu`: All afrimmlu tasks
* `afrimmlu_direct`: afrimmlu_direct evaluates models performance on the curated dataset
* `afrimmlu_translate`: afrimmlu_translate evaluates models in translate-test setting

#### Tasks

* `afrimmlu_direct_{language_code}`: each task evaluates for one language
* `afrimmlu_translate_{language_code}`: each task evaluates for one language

### Checklist

For adding novel benchmarks/datasets to the library:
* [x] Is the task an existing benchmark in the literature?
  * [x] Have you referenced the original paper that introduced the task?
  * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?

If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [x] Have you noted which, if any, published evaluation setups are matched by this variant?
* [x] Checked for equivalence with v0.3.0 LM Evaluation Harness
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/afrimmlu/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/afrimmlu/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2416 }
# IrokoBench ### Paper IrokoBench: A New Benchmark for African Languages in the Age of Large Language Models https://arxiv.org/pdf/2406.03368 IrokoBench is a human-translated benchmark dataset for 16 typologically diverse low-resource African languages covering three tasks: natural language inference (AfriXNLI), mathematical reasoning (AfriMGSM), and multi-choice knowledge-based QA (AfriMMLU). ### Citation ``` @misc{adelani2024irokobenchnewbenchmarkafrican, title={IrokoBench: A New Benchmark for African Languages in the Age of Large Language Models}, author={David Ifeoluwa Adelani and Jessica Ojo and Israel Abebe Azime and Jian Yun Zhuang and Jesujoba O. Alabi and Xuanli He and Millicent Ochieng and Sara Hooker and Andiswa Bukula and En-Shiun Annie Lee and Chiamaka Chukwuneke and Happy Buzaaba and Blessing Sibanda and Godson Kalipe and Jonathan Mukiibi and Salomon Kabongo and Foutse Yuehgoh and Mmasibidi Setaka and Lolwethu Ndolela and Nkiruka Odu and Rooweither Mabuya and Shamsuddeen Hassan Muhammad and Salomey Osei and Sokhar Samb and Tadesse Kebede Guge and Pontus Stenetorp}, year={2024}, eprint={2406.03368}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2406.03368}, } ``` ### Groups and Tasks #### Groups * `afrixnli`: All afrixnli tasks * `afrixnli_en_direct`: afrixnli_en_direct evaluates models performance using the anli prompt on the curated dataset * `afrixnli_native_direct`: afrixnli_native_direct evaluates models performance using the anli prompt translated to the respective languages on the curated dataset * `afrixnli_translate`: afrixnli_translate evaluates models using the anli prompt in translate-test setting * `afrixnli_manual_direct`: afrixnli_manual_direct evaluates models performance using Lai's prompt on the curated dataset * `afrixnli_manual_translate`: afrixnli_manual_translate evaluates models using Lai's prompt in translate-test setting #### Tasks * `afrixnli_en_direct_{language_code}`: each task evaluates for one language * `afrixnli_native_direct_{language_code}`: each task evaluates for one language * `afrixnli_translate_{language_code}`: each task evaluates for one language * `afrixnli_manual_direct_{language_code}`: each task evaluates for one language * `afrixnli_manual_translate_{language_code}`: each task evaluates for one language ### Checklist For adding novel benchmarks/datasets to the library: * [x] Is the task an existing benchmark in the literature? * [x] Have you referenced the original paper that introduced the task? * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [x] Is the "Main" variant of this task clearly denoted? * [x] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [x] Have you noted which, if any, published evaluation setups are matched by this variant? * [x] Checked for equivalence with v0.3.0 LM Evaluation Harness
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/afrixnli/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/afrixnli/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 3124 }
# AGIEval ### Paper Title: AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models Abstract: https://arxiv.org/abs/2304.06364.pdf AGIEval is a human-centric benchmark specifically designed to evaluate the general abilities of foundation models in tasks pertinent to human cognition and problem-solving. This benchmark is derived from 20 official, public, and high-standard admission and qualification exams intended for general human test-takers, such as general college admission tests (e.g., Chinese College Entrance Exam (Gaokao) and American SAT), law school admission tests, math competitions, lawyer qualification tests, and national civil service exams. Homepage: https://github.com/ruixiangcui/AGIEval ### Citation ``` @misc{zhong2023agieval, title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models}, author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan}, year={2023}, eprint={2304.06364}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` Please make sure to cite all the individual datasets in your paper when you use them. We provide the relevant citation information below: ``` @inproceedings{ling-etal-2017-program, title = "Program Induction by Rationale Generation: Learning to Solve and Explain Algebraic Word Problems", author = "Ling, Wang and Yogatama, Dani and Dyer, Chris and Blunsom, Phil", booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2017", address = "Vancouver, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P17-1015", doi = "10.18653/v1/P17-1015", pages = "158--167", abstract = "Solving algebraic word problems requires executing a series of arithmetic operations{---}a program{---}to obtain a final answer. However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating answer rationales, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. 
Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs.", } @inproceedings{hendrycksmath2021, title={Measuring Mathematical Problem Solving With the MATH Dataset}, author={Dan Hendrycks and Collin Burns and Saurav Kadavath and Akul Arora and Steven Basart and Eric Tang and Dawn Song and Jacob Steinhardt}, journal={NeurIPS}, year={2021} } @inproceedings{Liu2020LogiQAAC, title={LogiQA: A Challenge Dataset for Machine Reading Comprehension with Logical Reasoning}, author={Jian Liu and Leyang Cui and Hanmeng Liu and Dandan Huang and Yile Wang and Yue Zhang}, booktitle={International Joint Conference on Artificial Intelligence}, year={2020} } @inproceedings{zhong2019jec, title={JEC-QA: A Legal-Domain Question Answering Dataset}, author={Zhong, Haoxi and Xiao, Chaojun and Tu, Cunchao and Zhang, Tianyang and Liu, Zhiyuan and Sun, Maosong}, booktitle={Proceedings of AAAI}, year={2020}, } @article{Wang2021FromLT, title={From LSAT: The Progress and Challenges of Complex Reasoning}, author={Siyuan Wang and Zhongkun Liu and Wanjun Zhong and Ming Zhou and Zhongyu Wei and Zhumin Chen and Nan Duan}, journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing}, year={2021}, volume={30}, pages={2201-2216} } ``` ### Groups, Tags, and Tasks #### Groups - `agieval`: Evaluates all tasks listed below. - `agieval_en`: Evaluates all English subtasks: `agieval_aqua_rat`, `agieval_gaokao_english`, `agieval_logiqa_en`, `agieval_lsat_*`, `agieval_sat_*`, `agieval_math` - `agieval_cn`: Evaluates all Chinese subtasks: `agieval_gaokao_biology`, `agieval_gaokao_chemistry`, `agieval_gaokao_chinese`, `agieval_gaokao_geography`, `agieval_gaokao_history`, `agieval_gaokao_mathqa`, `agieval_gaokao_mathcloze`, `agieval_gaokao_physics`, `agieval_jec_qa_ca`, `agieval_jec_qa_kd`, `agieval_logiqa_zh` - `agieval_nous`: Evaluates a specific subset of AGIEval tasks (multiple-choice and english-only), namely those in https://github.com/teknium1/LLM-Benchmark-Logs/blob/main/benchmark-logs/Mistral-7B-Base.md #### Tags None. #### Tasks - `agieval_aqua_rat` - `agieval_gaokao_biology` - `agieval_gaokao_chemistry` - `agieval_gaokao_chinese` - `agieval_gaokao_english` - `agieval_gaokao_geography` - `agieval_gaokao_history` - `agieval_gaokao_mathqa` - `agieval_gaokao_mathcloze` - `agieval_gaokao_physics` - `agieval_jec_qa_ca` - `agieval_jec_qa_kd` - `agieval_logiqa_en` - `agieval_logiqa_zh` - `agieval_lsat_ar` - `agieval_lsat_lr` - `agieval_lsat_rc` - `agieval_sat_en` - `agieval_sat_en_without_passage` - `agieval_sat_math` - `agieval_math`
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/agieval/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/agieval/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 5308 }
# GSM8k ## Paper Training Verifiers to Solve Math Word Problems https://arxiv.org/abs/2110.14168 State-of-the-art language models can match human performance on many tasks, but they still struggle to robustly perform multi-step mathematical reasoning. To diagnose the failures of current models and support research, we introduce GSM8K, a dataset of 8.5K high quality linguistically diverse grade school math word problems. We find that even the largest transformer models fail to achieve high test performance, despite the conceptual simplicity of this problem distribution. NOTE: See the official implementation of the task: https://github.com/openai/grade-school-math/blob/master/grade_school_math/calculator.py for how to make use of the dataset's calculator annotations in your language model's sample/generation function. Homepage: https://github.com/openai/grade-school-math ## Citation ``` @misc{cobbe2021training, title={Training Verifiers to Solve Math Word Problems}, author={Karl Cobbe and Vineet Kosaraju and Mohammad Bavarian and Jacob Hilton and Reiichiro Nakano and Christopher Hesse and John Schulman}, year={2021}, eprint={2110.14168}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` ### Groups and Tasks #### Groups - `math_word_problems` - `chain_of_thought` - `self_consistency` #### Tasks - `gsm8k_yaml` - `gsm8k_cot`: GSM8K with Chain-of-Thought - `gsm8k_cot_self_consistency`: GSM8K with Chain-of-Thought and Self-Consistency - `gsm8k_cot_llama`: GSM8K with prompt formatting modified to conform to the evaluation settings described by Meta here: https://huggingface.co/datasets/meta-llama/Meta-Llama-3.1-8B-Instruct-evals/viewer/Meta-Llama-3.1-8B-Instruct-evals__gsm8k__details?row=0 - Use this task with --fewshot_as_multiturn and --apply_chat_template to replicate Meta's reported performance. ### Checklist - [x] Is in Eval-harness v1.0 ? - [ ] Has been checked for regression from v1.0? - [ ] Has been checked for equivalence with original paper methodology? - [ ] "Main" checked variant clearly denoted? ### Variant Wishlist - [ ] Variant with Calculator (see https://github.com/openai/grade-school-math/blob/master/grade_school_math/calculator.py for example implementation) - [ ] Using Verifiers - [ ] Majority voting "without CoT"
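To make the note about `gsm8k_cot_llama` concrete, a hedged sketch of such a run is shown below; the instruct checkpoint is only an example, and any chat-tuned model works the same way:

```bash
# gsm8k_cot_llama expects chat formatting plus multi-turn few-shot examples.
lm_eval --model hf \
    --model_args pretrained=meta-llama/Meta-Llama-3.1-8B-Instruct \
    --tasks gsm8k_cot_llama \
    --apply_chat_template \
    --fewshot_as_multiturn \
    --batch_size 4
```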
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/aime/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/aime/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2325 }
# ANLI ### Paper Title: `Adversarial NLI: A New Benchmark for Natural Language Understanding` Paper Link: https://arxiv.org/abs/1910.14599 Adversarial NLI (ANLI) is a dataset collected via an iterative, adversarial human-and-model-in-the-loop procedure. It consists of three rounds that progressively increase in difficulty and complexity, and each question-answer pair includes annotator-provided explanations. Homepage: https://github.com/facebookresearch/anli ### Citation ``` @inproceedings{nie-etal-2020-adversarial, title = "Adversarial {NLI}: A New Benchmark for Natural Language Understanding", author = "Nie, Yixin and Williams, Adina and Dinan, Emily and Bansal, Mohit and Weston, Jason and Kiela, Douwe", booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", year = "2020", publisher = "Association for Computational Linguistics", } ``` ### Groups and Tasks #### Groups * `anli`: Evaluates `anli_r1`, `anli_r2`, and `anli_r3` #### Tasks * `anli_r1`: The data collected adversarially in the first round. * `anli_r2`: The data collected adversarially in the second round, after training on the previous round's data. * `anli_r3`: The data collected adversarially in the third round, after training on the previous multiple rounds of data. ### Checklist For adding novel benchmarks/datasets to the library: * [x] Is the task an existing benchmark in the literature? * [x] Have you referenced the original paper that introduced the task? * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [x] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
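A minimal sketch of evaluating the `anli` group, which expands to all three rounds; the model ID below is only a placeholder:

```bash
# Runs anli_r1, anli_r2, and anli_r3 and reports them under the `anli` group.
lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-1.4b \
    --tasks anli \
    --num_fewshot 0 \
    --batch_size 16
```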
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/anli/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/anli/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2041 }
# Arabic Leaderboard Title: Open Arabic LLM Leaderboard The Open Arabic LLM Leaderboard evaluates language models on a large number of different evaluation tasks that reflect the characteristics of the Arabic language and culture. The benchmark uses several datasets, most of them translated to Arabic, and validated by native Arabic speakers. They also used benchmarks from other papers or prepared benchmarks from scratch natively for Arabic. Homepage: https://huggingface.co/spaces/OALL/Open-Arabic-LLM-Leaderboard ### Citation ``` @misc{OALL, author = {Elfilali, Ali and Alobeidli, Hamza and Fourrier, Clémentine and Boussaha, Basma El Amel and Cojocaru, Ruxandra and Habib, Nathan and Hacid, Hakim}, title = {Open Arabic LLM Leaderboard}, year = {2024}, publisher = {OALL}, howpublished = "\url{https://huggingface.co/spaces/OALL/Open-Arabic-LLM-Leaderboard}" } @inproceedings{almazrouei-etal-2023-alghafa, title = "{A}l{G}hafa Evaluation Benchmark for {A}rabic Language Models", author = "Almazrouei, Ebtesam and Cojocaru, Ruxandra and Baldo, Michele and Malartic, Quentin and Alobeidli, Hamza and Mazzotta, Daniele and Penedo, Guilherme and Campesan, Giulia and Farooq, Mugariya and Alhammadi, Maitha and Launay, Julien and Noune, Badreddine", editor = "Sawaf, Hassan and El-Beltagy, Samhaa and Zaghouani, Wajdi and Magdy, Walid and Abdelali, Ahmed and Tomeh, Nadi and Abu Farha, Ibrahim and Habash, Nizar and Khalifa, Salam and Keleg, Amr and Haddad, Hatem and Zitouni, Imed and Mrini, Khalil and Almatham, Rawan", booktitle = "Proceedings of ArabicNLP 2023", month = dec, year = "2023", address = "Singapore (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.arabicnlp-1.21", doi = "10.18653/v1/2023.arabicnlp-1.21", pages = "244--275", abstract = "Recent advances in the space of Arabic large language models have opened up a wealth of potential practical applications. From optimal training strategies, large scale data acquisition and continuously increasing NLP resources, the Arabic LLM landscape has improved in a very short span of time, despite being plagued by training data scarcity and limited evaluation resources compared to English. In line with contributing towards this ever-growing field, we introduce AlGhafa, a new multiple-choice evaluation benchmark for Arabic LLMs. For showcasing purposes, we train a new suite of models, including a 14 billion parameter model, the largest monolingual Arabic decoder-only model to date. We use a collection of publicly available datasets, as well as a newly introduced HandMade dataset consisting of 8 billion tokens. 
Finally, we explore the quantitative and qualitative toxicity of several Arabic models, comparing our models to existing public Arabic LLMs.", } @misc{huang2023acegpt, title={AceGPT, Localizing Large Language Models in Arabic}, author={Huang Huang and Fei Yu and Jianqing Zhu and Xuening Sun and Hao Cheng and Dingjie Song and Zhihong Chen and Abdulmohsen Alharthi and Bang An and Ziche Liu and Zhiyi Zhang and Junying Chen and Jianquan Li and Benyou Wang and Lian Zhang and Ruoyu Sun and Xiang Wan and Haizhou Li and Jinchao Xu}, year={2023}, eprint={2309.12053}, archivePrefix={arXiv}, primaryClass={cs.CL} } @misc{lighteval, author = {Fourrier, Clémentine and Habib, Nathan and Wolf, Thomas and Tunstall, Lewis}, title = {LightEval: A lightweight framework for LLM evaluation}, year = {2023}, version = {0.3.0}, url = {https://github.com/huggingface/lighteval} } ``` ### Groups and Tasks * `arabic_leaderboard_alghafa`: A multiple-choice evaluation benchmark for zero- and few-shot evaluation of Arabic LLMs prepared from scratch natively for Arabic. * Paper: https://aclanthology.org/2023.arabicnlp-1.21.pdf * You can find the list of the tasks as follows: * `arabic_leaderboard_alghafa_mcq_exams_test_ar` * `arabic_leaderboard_alghafa_meta_ar_dialects` * `arabic_leaderboard_alghafa_meta_ar_msa` * `arabic_leaderboard_alghafa_multiple_choice_facts_truefalse_balanced_task` * `arabic_leaderboard_alghafa_multiple_choice_grounded_statement_soqal_task` * `arabic_leaderboard_alghafa_multiple_choice_grounded_statement_xglue_mlqa_task` * `arabic_leaderboard_alghafa_multiple_choice_rating_sentiment_no_neutral_task` * `arabic_leaderboard_alghafa_multiple_choice_rating_sentiment_task` * `arabic_leaderboard_alghafa_multiple_choice_sentiment_task` * `arabic_leaderboard_arabic_exams`: A question answering benchmark for high school examinations in different school subjects that requires knowledge and reasoning in different languages in multiple domains. * Paper: https://aclanthology.org/2020.emnlp-main.438.pdf * `arabic_leaderboard_arabic_mmlu`: A multi-task language understanding benchmark for the Arabic language, sourced from school exams across diverse educational levels in different countries with native speakers in the region. The data comprises multiple choice questions in 40 tasks. 
* Paper: https://arxiv.org/pdf/2402.12840 * You can find the list of the tasks as follows: * `arabic_leaderboard_arabic_mmlu_abstract_algebra` * `arabic_leaderboard_arabic_mmlu_anatomy` * `arabic_leaderboard_arabic_mmlu_astronomy` * `arabic_leaderboard_arabic_mmlu_business_ethics` * `arabic_leaderboard_arabic_mmlu_clinical_knowledge` * `arabic_leaderboard_arabic_mmlu_college_biology` * `arabic_leaderboard_arabic_mmlu_college_chemistry` * `arabic_leaderboard_arabic_mmlu_college_computer_science` * `arabic_leaderboard_arabic_mmlu_college_mathematics` * `arabic_leaderboard_arabic_mmlu_college_medicine` * `arabic_leaderboard_arabic_mmlu_college_physics` * `arabic_leaderboard_arabic_mmlu_computer_security` * `arabic_leaderboard_arabic_mmlu_conceptual_physics` * `arabic_leaderboard_arabic_mmlu_econometrics` * `arabic_leaderboard_arabic_mmlu_electrical_engineering` * `arabic_leaderboard_arabic_mmlu_elementary_mathematics` * `arabic_leaderboard_arabic_mmlu_formal_logic` * `arabic_leaderboard_arabic_mmlu_global_facts` * `arabic_leaderboard_arabic_mmlu_high_school_biology` * `arabic_leaderboard_arabic_mmlu_high_school_chemistry` * `arabic_leaderboard_arabic_mmlu_high_school_computer_science` * `arabic_leaderboard_arabic_mmlu_high_school_european_history` * `arabic_leaderboard_arabic_mmlu_high_school_geography` * `arabic_leaderboard_arabic_mmlu_high_school_government_and_politics` * `arabic_leaderboard_arabic_mmlu_high_school_macroeconomics` * `arabic_leaderboard_arabic_mmlu_high_school_mathematics` * `arabic_leaderboard_arabic_mmlu_high_school_microeconomics` * `arabic_leaderboard_arabic_mmlu_high_school_physics` * `arabic_leaderboard_arabic_mmlu_high_school_psychology` * `arabic_leaderboard_arabic_mmlu_high_school_statistics` * `arabic_leaderboard_arabic_mmlu_high_school_us_history` * `arabic_leaderboard_arabic_mmlu_high_school_world_history` * `arabic_leaderboard_arabic_mmlu_human_aging` * `arabic_leaderboard_arabic_mmlu_human_sexuality` * `arabic_leaderboard_arabic_mmlu_international_law` * `arabic_leaderboard_arabic_mmlu_jurisprudence` * `arabic_leaderboard_arabic_mmlu_logical_fallacies` * `arabic_leaderboard_arabic_mmlu_machine_learning` * `arabic_leaderboard_arabic_mmlu_management` * `arabic_leaderboard_arabic_mmlu_marketing` * `arabic_leaderboard_arabic_mmlu_medical_genetics` * `arabic_leaderboard_arabic_mmlu_miscellaneous` * `arabic_leaderboard_arabic_mmlu_moral_disputes` * `arabic_leaderboard_arabic_mmlu_moral_scenarios` * `arabic_leaderboard_arabic_mmlu_nutrition` * `arabic_leaderboard_arabic_mmlu_philosophy` * `arabic_leaderboard_arabic_mmlu_prehistory` * `arabic_leaderboard_arabic_mmlu_professional_accounting` * `arabic_leaderboard_arabic_mmlu_professional_law` * `arabic_leaderboard_arabic_mmlu_professional_medicine` * `arabic_leaderboard_arabic_mmlu_professional_psychology` * `arabic_leaderboard_arabic_mmlu_public_relations` * `arabic_leaderboard_arabic_mmlu_security_studies` * `arabic_leaderboard_arabic_mmlu_sociology` * `arabic_leaderboard_arabic_mmlu_us_foreign_policy` * `arabic_leaderboard_arabic_mmlu_virology` * `arabic_leaderboard_arabic_mmlu_world_religions` * `arabic_leaderboard_arabic_mt_arc_challenge`: AI2 Reasoning Challenge (ARC) is a multiple-choice question task. The dataset contains only natural, grade-school science questions, written for human tests. The challenge set contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm.
(machine translated benchmark - part of the Alghafa Arabic translated LLM benchmark) * Paper: https://aclanthology.org/2023.arabicnlp-1.21.pdf * `arabic_leaderboard_arabic_mt_arc_easy`: This dataset is the same as `arabic_arc_challenge`, except it is not from the challenge set. * Paper: https://aclanthology.org/2023.arabicnlp-1.21.pdf * `arabic_leaderboard_arabic_mt_boolq`: A true/false questions dataset that contains the columns passage, question, and the answer (i.e., true/false). (machine translated benchmark - part of the Alghafa Arabic translated LLM benchmark) * Paper: https://aclanthology.org/2023.arabicnlp-1.21.pdf * `arabic_leaderboard_arabic_mt_copa`: Choice Of Plausible Alternatives (COPA) is a multiple-choice question dataset, which involves open-domain commonsense causal reasoning. (machine translated benchmark - part of the Alghafa Arabic translated LLM benchmark) * Paper: https://aclanthology.org/2023.arabicnlp-1.21.pdf * `arabic_leaderboard_arabic_mt_hellaswag`: The task is to choose the next set of sentences, based on the given candidates. The tasks involve reading comprehension and information retrieval challenges by testing the abilities of the models on basic knowledge (i.e., from 3rd grade to 9th) and commonsense inference. (machine translated benchmark - part of the Alghafa Arabic translated LLM benchmark) * Paper: https://aclanthology.org/2023.arabicnlp-1.21.pdf * `arabic_leaderboard_arabic_mt_mmlu`: A multiple-choice question answering dataset from various branches of knowledge including humanities, social sciences, hard sciences, and other areas. The examples in the English dataset are translated into Arabic using ChatGPT with a translation prompt. * Paper: https://aclanthology.org/2023.arabicnlp-1.21.pdf * `arabic_leaderboard_arabic_mt_openbook_qa`: A multiple-choice openbook question answering dataset that requires external knowledge and reasoning. The open book that comes with these questions is based on elementary level science facts. (machine translated benchmark - part of the Alghafa Arabic translated LLM benchmark) * Paper: https://aclanthology.org/2023.arabicnlp-1.21.pdf * `arabic_leaderboard_arabic_mt_piqa`: Physical Interaction Question Answering (PIQA) is a multiple-choice question answering dataset based on physical commonsense reasoning. (machine translated benchmark - part of the Alghafa Arabic translated LLM benchmark) * Paper: https://aclanthology.org/2023.arabicnlp-1.21.pdf * `arabic_leaderboard_arabic_mt_race`: A multiple-choice questions dataset to assess reading comprehension tasks based on English exams in China, designed for middle school and high school students. (machine translated benchmark - part of the Alghafa Arabic translated LLM benchmark) * Paper: https://aclanthology.org/2023.arabicnlp-1.21.pdf * `arabic_leaderboard_arabic_mt_sciq`: A multiple-choice Science Question Answering task to assess understanding of scientific concepts about physics, chemistry, and biology. (machine translated benchmark - part of the Alghafa Arabic translated LLM benchmark) * Paper: https://aclanthology.org/2023.arabicnlp-1.21.pdf * `arabic_leaderboard_arabic_mt_toxigen`: This benchmark consists of tasks designed to evaluate language models and classify input text as hateful or not hateful.
(machine translated benchmark - part of the Alghafa Arabic translated LLM benchmark) * Paper: https://aclanthology.org/2023.arabicnlp-1.21.pdf * `arabic_leaderboard_acva`: Arabic-Culture-Value-Alignment (ACVA) is a yes/no question dataset, generated by GPT3.5 Turbo from Arabic topics to assess model alignment with Arabic values and cultures. * Paper: https://arxiv.org/pdf/2309.12053 * You can find the list of the tasks as follows: - `arabic_leaderboard_acva_Algeria` - `arabic_leaderboard_acva_Ancient_Egypt` - `arabic_leaderboard_acva_Arab_Empire` - `arabic_leaderboard_acva_Arabic_Architecture` - `arabic_leaderboard_acva_Arabic_Art` - `arabic_leaderboard_acva_Arabic_Astronomy` - `arabic_leaderboard_acva_Arabic_Calligraphy` - `arabic_leaderboard_acva_Arabic_Ceremony` - `arabic_leaderboard_acva_Arabic_Clothing` - `arabic_leaderboard_acva_Arabic_Culture` - `arabic_leaderboard_acva_Arabic_Food` - `arabic_leaderboard_acva_Arabic_Funeral` - `arabic_leaderboard_acva_Arabic_Geography` - `arabic_leaderboard_acva_Arabic_History` - `arabic_leaderboard_acva_Arabic_Language_Origin` - `arabic_leaderboard_acva_Arabic_Literature` - `arabic_leaderboard_acva_Arabic_Math` - `arabic_leaderboard_acva_Arabic_Medicine` - `arabic_leaderboard_acva_Arabic_Music` - `arabic_leaderboard_acva_Arabic_Ornament` - `arabic_leaderboard_acva_Arabic_Philosophy` - `arabic_leaderboard_acva_Arabic_Physics_and_Chemistry` - `arabic_leaderboard_acva_Arabic_Wedding` - `arabic_leaderboard_acva_Bahrain` - `arabic_leaderboard_acva_Comoros` - `arabic_leaderboard_acva_Egypt_modern` - `arabic_leaderboard_acva_InfluenceFromAncientEgypt` - `arabic_leaderboard_acva_InfluenceFromByzantium` - `arabic_leaderboard_acva_InfluenceFromChina` - `arabic_leaderboard_acva_InfluenceFromGreece` - `arabic_leaderboard_acva_InfluenceFromIslam` - `arabic_leaderboard_acva_InfluenceFromPersia` - `arabic_leaderboard_acva_InfluenceFromRome` - `arabic_leaderboard_acva_Iraq` - `arabic_leaderboard_acva_Islam_Education` - `arabic_leaderboard_acva_Islam_branches_and_schools` - `arabic_leaderboard_acva_Islamic_law_system` - `arabic_leaderboard_acva_Jordan` - `arabic_leaderboard_acva_Kuwait` - `arabic_leaderboard_acva_Lebanon` - `arabic_leaderboard_acva_Libya` - `arabic_leaderboard_acva_Mauritania` - `arabic_acva_Mesopotamia_civilization` - `arabic_leaderboard_acva_Morocco` - `arabic_leaderboard_acva_Oman` - `arabic_leaderboard_acva_Palestine` - `arabic_leaderboard_acva_Qatar` - `arabic_leaderboard_acva_Saudi_Arabia` - `arabic_leaderboard_acva_Somalia` - `arabic_leaderboard_acva_Sudan` - `arabic_leaderboard_acva_Syria` - `arabic_leaderboard_acva_Tunisia` - `arabic_leaderboard_acva_United_Arab_Emirates` - `arabic_leaderboard_acva_Yemen` - `arabic_leaderboard_acva_communication` - `arabic_leaderboard_acva_computer_and_phone` - `arabic_leaderboard_acva_daily_life` - `arabic_leaderboard_acva_entertainment` ### Checklist For adding novel benchmarks/datasets to the library: * [ ] Is the task an existing benchmark in the literature? * [ ] Have you referenced the original paper that introduced the task? * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
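Given how many subsets the leaderboard contains, individual subtasks can also be selected as a comma-separated list. A sketch with the harness's `lm_eval` CLI; the model ID is only a stand-in and should be replaced by an Arabic-capable checkpoint:

```bash
# Evaluate a handful of ACVA and translated subtasks rather than the whole leaderboard.
lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-1.4b \
    --tasks arabic_leaderboard_acva_Algeria,arabic_leaderboard_acva_Arabic_Literature,arabic_leaderboard_arabic_mt_boolq \
    --batch_size 8
```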
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/arabic_leaderboard_complete/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/arabic_leaderboard_complete/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 16333 }
# Arabic Leaderboard Light Title: Open Arabic LLM Leaderboard Light This leaderboard follows the same setup as [`arabic_leaderboard_complete`](../arabic_leaderboard_complete), except that a light version - a 10% random sample of the test set of each benchmark - is used to test the language models. NOTE: The ACVA benchmark includes a Yemen subset, which is very small (only 10 samples in the test split). For this subset, to obtain more reliable results, we use the original test set instead of a 10% sample. ### Checklist For adding novel benchmarks/datasets to the library: * [ ] Is the task an existing benchmark in the literature? * [ ] Have you referenced the original paper that introduced the task? * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/arabic_leaderboard_light/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/arabic_leaderboard_light/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1240 }
# ArabicMMLU ### Paper Title: ArabicMMLU: Assessing Massive Multitask Language Understanding in Arabic Abstract: https://arxiv.org/abs/2402.12840 The focus of language model evaluation has transitioned towards reasoning and knowledge intensive tasks, driven by advancements in pretraining large models. While state-of-the-art models are partially trained on large Arabic texts, evaluating their performance in Arabic remains challenging due to the limited availability of relevant datasets. To bridge this gap, we present ArabicMMLU, the first multi-task language understanding benchmark for the Arabic language, sourced from school exams across diverse educational levels in different countries spanning North Africa, the Levant, and the Gulf regions. Our data comprises 40 tasks and 14,575 multiple-choice questions in Modern Standard Arabic (MSA), and is carefully constructed by collaborating with native speakers in the region. Our comprehensive evaluations of 35 models reveal substantial room for improvement, particularly among the best open-source models. Notably, BLOOMZ, mT0, LLama2, and Falcon struggle to achieve a score of 50%, while even the top-performing Arabic centric model only achieves a score of 62.3%. The authors of the paper conducted studies by varying the language of the initial prompt and answer keys between English and Arabic. However, they set English initial prompts and answer keys as the standard, which is the version implemented in this task. Homepage: https://github.com/mbzuai-nlp/ArabicMMLU ### Citation ``` @misc{koto2024arabicmmlu, title={ArabicMMLU: Assessing Massive Multitask Language Understanding in Arabic}, author={Fajri Koto and Haonan Li and Sara Shatnawi and Jad Doughman and Abdelrahman Boda Sadallah and Aisha Alraeesi and Khalid Almubarak and Zaid Alyafeai and Neha Sengupta and Shady Shehata and Nizar Habash and Preslav Nakov and Timothy Baldwin}, year={2024}, eprint={2402.12840}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Groups and Tasks #### Groups * `arabicmmlu`: evaluates all ArabicMMLU tasks. * `arabicmmlu_stem`: evaluates STEM ArabicMMLU tasks. * `arabicmmlu_stem_social_science`: evaluates social science ArabicMMLU tasks. * `arabicmmlu_stem_humanities`: evaluates humanities ArabicMMLU tasks. * `arabicmmlu_stem_language`: evaluates Arabic language ArabicMMLU tasks. * `arabicmmlu_stem_other`: evaluates other ArabicMMLU tasks.
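A sketch of evaluating the full `arabicmmlu` group and saving results to disk, assuming the harness's `lm_eval` CLI; the model ID and output path are placeholders:

```bash
# Evaluate all ArabicMMLU tasks and write the aggregated results to a local directory.
lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-1.4b \
    --tasks arabicmmlu \
    --batch_size 8 \
    --output_path results/arabicmmlu
```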
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2948 }
# ARC ### Paper Title: Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge Abstract: https://arxiv.org/abs/1803.05457 The ARC dataset consists of 7,787 science exam questions drawn from a variety of sources, including science questions provided under license by a research partner affiliated with AI2. These are text-only, English language exam questions that span several grade levels as indicated in the files. Each question has a multiple choice structure (typically 4 answer options). The questions are sorted into a Challenge Set of 2,590 “hard” questions (those that both a retrieval and a co-occurrence method fail to answer correctly) and an Easy Set of 5,197 questions. Homepage: https://allenai.org/data/arc ### Citation ``` @article{Clark2018ThinkYH, title={Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge}, author={Peter Clark and Isaac Cowhey and Oren Etzioni and Tushar Khot and Ashish Sabharwal and Carissa Schoenick and Oyvind Tafjord}, journal={ArXiv}, year={2018}, volume={abs/1803.05457} } ``` ### Groups, Tags, and Tasks #### Groups None. #### Tags * `ai2_arc`: Evaluates `arc_easy` and `arc_challenge` #### Tasks * `arc_easy` * `arc_challenge` ### Checklist For adding novel benchmarks/datasets to the library: * [ ] Is the task an existing benchmark in the literature? * [ ] Have you referenced the original paper that introduced the task? * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
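Because `ai2_arc` is a tag, passing it to `--tasks` expands to both subsets. A minimal sketch follows; the model ID is a placeholder, and the few-shot count is adjustable rather than prescribed by this README:

```bash
# Expands to arc_easy and arc_challenge.
lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-1.4b \
    --tasks ai2_arc \
    --num_fewshot 25 \
    --batch_size 16
```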
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/arc/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/arc/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1927 }
# arc mt arc mt is an implementation of tasks that support machine-translated ARC Challenge evals, improving eval coverage across a number of additional languages. The main page for the effort is [here](https://huggingface.co/datasets/LumiOpen/arc_challenge_mt), and we will include more data and analysis there. Initial datasets cover a number of European languages, and we plan to expand to more in the future.
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/arc_mt/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/arc_mt/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 411 }
# Arithmetic ### Paper Title: `Language Models are Few-Shot Learners` Abstract: https://arxiv.org/abs/2005.14165 A small battery of 10 tests that involve asking language models a simple arithmetic problem in natural language. Homepage: https://github.com/openai/gpt-3/tree/master/data ### Citation ``` @inproceedings{NEURIPS2020_1457c0d6, author = {Brown, Tom and Mann, Benjamin and Ryder, Nick and Subbiah, Melanie and Kaplan, Jared D and Dhariwal, Prafulla and Neelakantan, Arvind and Shyam, Pranav and Sastry, Girish and Askell, Amanda and Agarwal, Sandhini and Herbert-Voss, Ariel and Krueger, Gretchen and Henighan, Tom and Child, Rewon and Ramesh, Aditya and Ziegler, Daniel and Wu, Jeffrey and Winter, Clemens and Hesse, Chris and Chen, Mark and Sigler, Eric and Litwin, Mateusz and Gray, Scott and Chess, Benjamin and Clark, Jack and Berner, Christopher and McCandlish, Sam and Radford, Alec and Sutskever, Ilya and Amodei, Dario}, booktitle = {Advances in Neural Information Processing Systems}, editor = {H. Larochelle and M. Ranzato and R. Hadsell and M. F. Balcan and H. Lin}, pages = {1877--1901}, publisher = {Curran Associates, Inc.}, title = {Language Models are Few-Shot Learners}, url = {https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf}, volume = {33}, year = {2020} } ``` ### Groups, Tags, and Tasks #### Tags * `arithmetic`: Evaluates `1dc` to `5ds` #### Tasks * `arithmetic_1dc` * `arithmetic_2da` * `arithmetic_2dm` * `arithmetic_2ds` * `arithmetic_3da` * `arithmetic_3ds` * `arithmetic_4da` * `arithmetic_4ds` * `arithmetic_5da` * `arithmetic_5ds` ### Checklist For adding novel benchmarks/datasets to the library: * [ ] Is the task an existing benchmark in the literature? * [ ] Have you referenced the original paper that introduced the task? * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/arithmetic/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/arithmetic/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2340 }
# ASDiv ### Paper Title: `ASDiv: A Diverse Corpus for Evaluating and Developing English Math Word Problem Solvers` Abstract: https://arxiv.org/abs/2106.15772 ASDiv (Academia Sinica Diverse MWP Dataset) is a diverse (in terms of both language patterns and problem types) English math word problem (MWP) corpus for evaluating the capability of various MWP solvers. Existing MWP corpora for studying AI progress remain limited either in language usage patterns or in problem types. We thus present a new English MWP corpus with 2,305 MWPs that cover more text patterns and most problem types taught in elementary school. Each MWP is annotated with its problem type and grade level (for indicating the level of difficulty). NOTE: We currently ignore formulas for answer generation. Homepage: https://github.com/chaochun/nlu-asdiv-dataset ### Citation ``` @misc{miao2021diverse, title={A Diverse Corpus for Evaluating and Developing English Math Word Problem Solvers}, author={Shen-Yun Miao and Chao-Chun Liang and Keh-Yih Su}, year={2021}, eprint={2106.15772}, archivePrefix={arXiv}, primaryClass={cs.AI} } ``` ### Groups, Tags, and Tasks #### Groups * Not part of a group yet. #### Tasks * `asdiv` * `asdiv_cot_llama`: ASDIV with prompt formatting modified to conform to the evaluation settings described by Meta here: https://huggingface.co/datasets/meta-llama/Meta-Llama-3.1-8B-Instruct-evals/viewer/Meta-Llama-3.1-8B-Instruct-evals__gsm8k__details?row=0 - Note that the CoT prompt from (https://arxiv.org/pdf/2201.11903) is used exactly as in GSM8k-CoT - This file is setup to run identically to the task `gsm8k_cot_llama` but for asdiv. - Use this task with --fewshot_as_multiturn and --apply_chat_template to run correctly with Llama Instruct models. ### Checklist For adding novel benchmarks/datasets to the library: * [ ] Is the task an existing benchmark in the literature? * [ ] Have you referenced the original paper that introduced the task? * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
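Mirroring the `gsm8k_cot_llama` setup referenced above, a hedged sketch of running `asdiv_cot_llama` with those flags; the instruct checkpoint is only an example:

```bash
# asdiv_cot_llama uses the same chat-template, multi-turn few-shot setup as gsm8k_cot_llama.
lm_eval --model hf \
    --model_args pretrained=meta-llama/Meta-Llama-3.1-8B-Instruct \
    --tasks asdiv_cot_llama \
    --apply_chat_template \
    --fewshot_as_multiturn \
    --batch_size 4
```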
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/asdiv/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/asdiv/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2483 }
# bAbI ### Paper Title: Towards ai-complete question answering: A set of prerequisite toy tasks Abstract: https://arxiv.org/abs/1502.05698 One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. We believe many existing learning systems can currently not solve them, and hence our aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. We also extend and improve the recently introduced Memory Networks model, and show it is able to solve some, but not all, of the tasks. Homepage: https://github.com/facebookarchive/bAbI-tasks ### Citation ``` @article{weston2015towards, title={Towards ai-complete question answering: A set of prerequisite toy tasks}, author={Weston, Jason and Bordes, Antoine and Chopra, Sumit and Rush, Alexander M and Van Merri{\"e}nboer, Bart and Joulin, Armand and Mikolov, Tomas}, journal={arXiv preprint arXiv:1502.05698}, year={2015} } ``` ### Groups, Tags, and Tasks #### Groups * Not part of a group yet #### Tags * No tags applied. #### Tasks * `babi` ### Checklist For adding novel benchmarks/datasets to the library: * [ ] Is the task an existing benchmark in the literature? * [ ] Have you referenced the original paper that introduced the task? * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/babi/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/babi/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2300 }
# BasqueBench ### Paper BasqueBench is a benchmark for evaluating language models in Basque tasks. That is, it evaluates the ability of a language model to understand and generate Basque text. BasqueBench offers a combination of pre-existing, open datasets and datasets developed exclusively for this benchmark. All the details of BasqueBench will be published in a paper soon. The new evaluation datasets included in BasqueBench are: | Task | Category | Homepage | |:-------------:|:-----:|:-----:| | MGSM_eu | Math | https://huggingface.co/datasets/HiTZ/MGSM-eu | | WNLI_eu | Natural Language Inference | https://huggingface.co/datasets/HiTZ/wnli-eu | | XCOPA_eu | Commonsense Reasoning | https://huggingface.co/datasets/HiTZ/XCOPA-eu | The datasets included in BasqueBench that have been made public in previous publications are: | Task | Category | Paper title | Homepage | |:-------------:|:-----:|:-------------:|:-----:| | Belebele_eu | Reading Comprehension | [The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants](https://arxiv.org/abs/2308.16884) | https://huggingface.co/datasets/facebook/belebele | | EusExams | Question Answering | [Latxa: An Open Language Model and Evaluation Suite for Basque](https://arxiv.org/abs/2403.20266) | https://huggingface.co/datasets/HiTZ/EusExams | | EusProficiency | Question Answering | [Latxa: An Open Language Model and Evaluation Suite for Basque](https://arxiv.org/abs/2403.20266) | https://huggingface.co/datasets/HiTZ/EusProficiency | | EusReading | Reading Comprehension | [Latxa: An Open Language Model and Evaluation Suite for Basque](https://arxiv.org/abs/2403.20266) | https://huggingface.co/datasets/HiTZ/EusReading | | EusTrivia | Question Answering | [Latxa: An Open Language Model and Evaluation Suite for Basque](https://arxiv.org/abs/2403.20266) | https://huggingface.co/datasets/HiTZ/EusTrivia | | FLORES_eu | Translation | [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) | https://huggingface.co/datasets/facebook/flores | | QNLIeu | Natural Language Inference | [BasqueGLUE: A Natural Language Understanding Benchmark for Basque](https://aclanthology.org/2022.lrec-1.172/) | https://huggingface.co/datasets/orai-nlp/basqueGLUE | | XNLIeu | Natural Language Inference | [XNLIeu: a dataset for cross-lingual NLI in Basque](https://arxiv.org/abs/2404.06996) | https://huggingface.co/datasets/HiTZ/xnli-eu | | XStoryCloze_eu | Commonsense Reasoning | [Few-shot Learning with Multilingual Generative Language Models](https://aclanthology.org/2022.emnlp-main.616/) | https://huggingface.co/datasets/juletxara/xstory_cloze | ### Citation Paper for BasqueBench coming soon. ### Groups and Tasks #### Groups - `basque_bench`: All tasks included in BasqueBench. - `flores_eu`: All FLORES translation tasks from or to Basque. #### Tasks The following tasks evaluate tasks in the BasqueBench dataset using various scoring methods.
- `belebele_eus_Latn` - `eus_exams_eu` - `eus_proficiency` - `eus_reading` - `eus_trivia` - `flores_eu` - `flores_eu-ca` - `flores_eu-de` - `flores_eu-en` - `flores_eu-es` - `flores_eu-fr` - `flores_eu-gl` - `flores_eu-it` - `flores_eu-pt` - `flores_ca-eu` - `flores_de-eu` - `flores_en-eu` - `flores_es-eu` - `flores_fr-eu` - `flores_gl-eu` - `flores_it-eu` - `flores_pt-eu` - `mgsm_direct_eu` - `mgsm_native_cot_eu` - `qnlieu` - `wnli_eu` - `xcopa_eu` - `xnli_eu` - `xnli_eu_native` - `xstorycloze_eu` Some of these tasks are taken from benchmarks already available in LM Evaluation Harness. These are: - `belebele_eus_Latn`: Belebele Basque - `qnlieu`: From BasqueGLUE ### Checklist * [x] Is the task an existing benchmark in the literature? * [ ] Have you referenced the original paper that introduced the task? * [ ] If yes, does the original paper provide a reference implementation? * [ ] Yes, original implementation contributed by author of the benchmark If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
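As a sketch, either the whole suite or a single FLORES translation direction can be selected by name with the harness's `lm_eval` CLI; the model ID below is only a stand-in:

```bash
# Entire BasqueBench suite.
lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-1.4b \
    --tasks basque_bench \
    --batch_size 8

# A single translation direction (English -> Basque) from the FLORES subset.
lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-1.4b \
    --tasks flores_en-eu \
    --batch_size 8
```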
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/basque_bench/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/basque_bench/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 4364 }
# BasqueGLUE ### Paper Title: `BasqueGLUE: A Natural Language Understanding Benchmark for Basque` Abstract: `https://aclanthology.org/2022.lrec-1.172/` Natural Language Understanding (NLU) technology has improved significantly over the last few years and multitask benchmarks such as GLUE are key to evaluate this improvement in a robust and general way. These benchmarks take into account a wide and diverse set of NLU tasks that require some form of language understanding, beyond the detection of superficial, textual clues. However, they are costly to develop and language-dependent, and therefore they are only available for a small number of languages. In this paper, we present BasqueGLUE, the first NLU benchmark for Basque, a less-resourced language, which has been elaborated from previously existing datasets and following similar criteria to those used for the construction of GLUE and SuperGLUE. We also report the evaluation of two state-of-the-art language models for Basque on BasqueGLUE, thus providing a strong baseline to compare upon. BasqueGLUE is freely available under an open license. Homepage: `https://github.com/orai-nlp/BasqueGLUE` Title: `Latxa: An Open Language Model and Evaluation Suite for Basque` Abstract: `https://arxiv.org/abs/2403.20266` The use of BasqueGLUE for evaluating the performance of decoder models in Basque is presented in this paper. Homepage: `https://github.com/hitz-zentroa/latxa` ### Citation ``` @InProceedings{urbizu2022basqueglue, author = {Urbizu, Gorka and San Vicente, Iñaki and Saralegi, Xabier and Agerri, Rodrigo and Soroa, Aitor}, title = {BasqueGLUE: A Natural Language Understanding Benchmark for Basque}, booktitle = {Proceedings of the Language Resources and Evaluation Conference}, month = {June}, year = {2022}, address = {Marseille, France}, publisher = {European Language Resources Association}, pages = {1603--1612}, url = {https://aclanthology.org/2022.lrec-1.172} } @misc{etxaniz2024latxa, title={Latxa: An Open Language Model and Evaluation Suite for Basque}, author={Julen Etxaniz and Oscar Sainz and Naiara Perez and Itziar Aldabe and German Rigau and Eneko Agirre and Aitor Ormazabal and Mikel Artetxe and Aitor Soroa}, year={2024}, eprint={2403.20266}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Groups, Tags, and Tasks #### Groups None. #### Tags * `basque-glue`: First version of the implementation. Calls all subtasks, but does not average. #### Tasks * `bhtc_v2`: Topic classification of news extracts with 12 categories. * `bec2016eu`: Sentiment analysis on tweets about the campaign for the 2016 Basque elections. * `vaxx_stance`: Stance detection on tweets around the anti-vaccine movement. * `qnlieu`: Q&A NLI as in [glue/qnli](../glue/qnli). * `wiceu`: Word-in-Context as in [super_glue/wic](../super_glue/wic). * `epec_koref_bin`: Coreference detection as in [super_glue/wsc](../super_glue/wsc). ### Checklist For adding novel benchmarks/datasets to the library: * [ ] Is the task an existing benchmark in the literature? * [ ] Have you referenced the original paper that introduced the task? * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? 
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/basqueglue/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/basqueglue/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 3712 }
# BigBenchHard ## Paper Title: `Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them` Abstract: https://arxiv.org/abs/2210.09261 A suite of 23 challenging BIG-Bench tasks which we call BIG-Bench Hard (BBH). These are the tasks for which prior language model evaluations did not outperform the average human-rater. Homepage: https://github.com/suzgunmirac/BIG-Bench-Hard ## Citation ``` @article{suzgun2022challenging, title={Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them}, author={Suzgun, Mirac and Scales, Nathan and Sch{\"a}rli, Nathanael and Gehrmann, Sebastian and Tay, Yi and Chung, Hyung Won and Chowdhery, Aakanksha and Le, Quoc V and Chi, Ed H and Zhou, Denny and Wei, Jason}, journal={arXiv preprint arXiv:2210.09261}, year={2022} } ``` ### Groups, Tags, and Tasks #### Groups - `bbh`: the same as `bbh_cot_fewshot`. - `bbh_zeroshot` - `bbh_fewshot` - `bbh_cot_fewshot` - `bbh_cot_zeroshot` #### Tags None. #### Tasks - ... ### Checklist - [x] Is in Eval-harness v1.0 ? - [ ] Has been checked for regression from v1.0? - [ ] Has been checked for equivalence with original paper methodology? - [ ] "Main" checked variant clearly denoted? ### Variant Wishlist - [ ] Variant with Calculator (see https://github.com/openai/grade-school-math/blob/master/grade_school_math/calculator.py for example implementation) - [ ] Using Verifiers - [ ] Majority voting "without CoT"
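To make the variants above concrete, a minimal sketch comparing the zero-shot and chain-of-thought few-shot groups; the model ID is only a placeholder:

```bash
# Chain-of-thought, few-shot variant (same as `bbh`).
lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-1.4b \
    --tasks bbh_cot_fewshot \
    --batch_size 4

# Zero-shot variant for comparison.
lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-1.4b \
    --tasks bbh_zeroshot \
    --batch_size 4
```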
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/bbh/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/bbh/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1450 }
# Belebele ### Paper The Belebele Benchmark for Massively Multilingual NLU Evaluation https://arxiv.org/abs/2308.16884 Belebele is a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. This dataset enables the evaluation of mono- and multi-lingual models in high-, medium-, and low-resource languages. Each question has four multiple-choice answers and is linked to a short passage from the FLORES-200 dataset. The human annotation procedure was carefully curated to create questions that discriminate between different levels of generalizable language comprehension and is reinforced by extensive quality checks. While all questions directly relate to the passage, the English dataset on its own proves difficult enough to challenge state-of-the-art language models. Being fully parallel, this dataset enables direct comparison of model performance across all languages. Belebele opens up new avenues for evaluating and analyzing the multilingual abilities of language models and NLP systems. Homepage: https://github.com/facebookresearch/belebele ### Citation ```bibtex @misc{bandarkar2023belebele, title={The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants}, author={Lucas Bandarkar and Davis Liang and Benjamin Muller and Mikel Artetxe and Satya Narayan Shukla and Donald Husa and Naman Goyal and Abhinandan Krishnan and Luke Zettlemoyer and Madian Khabsa}, year={2023}, eprint={2308.16884}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Groups and Tasks #### Groups - `belebele`: All 122 languages of the Belebele dataset, evaluated following the methodology in MMLU's original implementation. #### Tasks The following tasks evaluate languages in the Belebele dataset using loglikelihood-based multiple-choice scoring: - `belebele_{language}` The variant evaluated here is the 0-shot or few-shot evaluation with English Instructions. ### Checklist * [x] Is the task an existing benchmark in the literature? * [x] Have you referenced the original paper that introduced the task? * [x] If yes, does the original paper provide a reference implementation? * [ ] Yes, original implementation contributed by author of the benchmark If other tasks on this dataset are already supported: * [x] Is the "Main" variant of this task clearly denoted? * [x] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
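Tasks follow the `belebele_{language}` pattern using FLORES-200 language codes; the specific code below (`eng_Latn` for English) is an assumption based on that pattern, and the model ID is a placeholder:

```bash
# English split of Belebele; swap the language code for any of the 122 variants.
lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-1.4b \
    --tasks belebele_eng_Latn \
    --num_fewshot 0 \
    --batch_size 16
```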
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/belebele/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/belebele/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2577 }
# BertaQA ### Paper Title: BertaQA: How Much Do Language Models Know About Local Culture? Abstract: https://arxiv.org/abs/2406.07302 Large Language Models (LLMs) exhibit extensive knowledge about the world, but most evaluations have been limited to global or anglocentric subjects. This raises the question of how well these models perform on topics relevant to other cultures, whose presence on the web is not that prominent. To address this gap, we introduce BertaQA, a multiple-choice trivia dataset that is parallel in English and Basque. The dataset consists of a local subset with questions pertinent to the Basque culture, and a global subset with questions of broader interest. We find that state-of-the-art LLMs struggle with local cultural knowledge, even as they excel on global topics. However, we show that continued pre-training in Basque significantly improves the models' performance on Basque culture, even when queried in English. To our knowledge, this is the first solid evidence of knowledge transfer from a low-resource to a high-resource language. Our analysis sheds light on the complex interplay between language and knowledge, and reveals that some prior findings do not fully hold when reassessed on local topics. Our dataset and evaluation code are available under open licenses at https://github.com/juletx/BertaQA. Homepage: https://github.com/juletx/BertaQA ### Citation ``` @misc{etxaniz2024bertaqa, title={BertaQA: How Much Do Language Models Know About Local Culture?}, author={Julen Etxaniz and Gorka Azkune and Aitor Soroa and Oier Lopez de Lacalle and Mikel Artetxe}, year={2024}, eprint={2406.07302}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Groups and Tasks #### Groups - `bertaqa`: Group of BertaQA tasks. #### Tasks - `bertaqa_eu`: Trivia questions in Basque. - `bertaqa_en`: Trivia questions in English, human-translated from Basque. - `bertaqa_en_mt_*`: Trivia questions in English, machine-translated from Basque with different models. ### Checklist For adding novel benchmarks/datasets to the library: - [ ] Is the task an existing benchmark in the literature? - [ ] Have you referenced the original paper that introduced the task? - [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: - [ ] Is the "Main" variant of this task clearly denoted? - [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? - [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
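A sketch of evaluating the parallel Basque and English subsets side by side with the harness's `lm_eval` CLI; the model ID is only a stand-in:

```bash
# Compare local-culture performance across the two parallel subsets.
lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-1.4b \
    --tasks bertaqa_eu,bertaqa_en \
    --batch_size 16
```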
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/bertaqa/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/bertaqa/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2721 }
# BigBench ### Paper Title: `Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models` Abstract: https://arxiv.org/abs/2206.04615 The Beyond the Imitation Game Benchmark (BIG-bench) is a collaborative benchmark intended to probe large language models and extrapolate their future capabilities. Homepage: https://github.com/google/BIG-bench ### Citation ``` @misc{srivastava2022imitation, title={Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models}, author={Aarohi Srivastava and Abhinav Rastogi and Abhishek Rao and Abu Awal Md Shoeb and Abubakar Abid and Adam Fisch and Adam R. Brown and Adam Santoro and Aditya Gupta and Adrià Garriga-Alonso and Agnieszka Kluska and Aitor Lewkowycz and Akshat Agarwal and Alethea Power and Alex Ray and Alex Warstadt and Alexander W. Kocurek and Ali Safaya and Ali Tazarv and Alice Xiang and Alicia Parrish and Allen Nie and Aman Hussain and Amanda Askell and Amanda Dsouza and Ambrose Slone and Ameet Rahane and Anantharaman S. Iyer and Anders Andreassen and Andrea Madotto and Andrea Santilli and Andreas Stuhlmüller and Andrew Dai and Andrew La and Andrew Lampinen and Andy Zou and Angela Jiang and Angelica Chen and Anh Vuong and Animesh Gupta and Anna Gottardi and Antonio Norelli and Anu Venkatesh and Arash Gholamidavoodi and Arfa Tabassum and Arul Menezes and Arun Kirubarajan and Asher Mullokandov and Ashish Sabharwal and Austin Herrick and Avia Efrat and Aykut Erdem and Ayla Karakaş and B. Ryan Roberts and Bao Sheng Loe and Barret Zoph and Bartłomiej Bojanowski and Batuhan Özyurt and Behnam Hedayatnia and Behnam Neyshabur and Benjamin Inden and Benno Stein and Berk Ekmekci and Bill Yuchen Lin and Blake Howald and Cameron Diao and Cameron Dour and Catherine Stinson and Cedrick Argueta and César Ferri Ramírez and Chandan Singh and Charles Rathkopf and Chenlin Meng and Chitta Baral and Chiyu Wu and Chris Callison-Burch and Chris Waites and Christian Voigt and Christopher D. Manning and Christopher Potts and Cindy Ramirez and Clara E. Rivera and Clemencia Siro and Colin Raffel and Courtney Ashcraft and Cristina Garbacea and Damien Sileo and Dan Garrette and Dan Hendrycks and Dan Kilman and Dan Roth and Daniel Freeman and Daniel Khashabi and Daniel Levy and Daniel Moseguí González and Danielle Perszyk and Danny Hernandez and Danqi Chen and Daphne Ippolito and Dar Gilboa and David Dohan and David Drakard and David Jurgens and Debajyoti Datta and Deep Ganguli and Denis Emelin and Denis Kleyko and Deniz Yuret and Derek Chen and Derek Tam and Dieuwke Hupkes and Diganta Misra and Dilyar Buzan and Dimitri Coelho Mollo and Diyi Yang and Dong-Ho Lee and Ekaterina Shutova and Ekin Dogus Cubuk and Elad Segal and Eleanor Hagerman and Elizabeth Barnes and Elizabeth Donoway and Ellie Pavlick and Emanuele Rodola and Emma Lam and Eric Chu and Eric Tang and Erkut Erdem and Ernie Chang and Ethan A. 
Chi and Ethan Dyer and Ethan Jerzak and Ethan Kim and Eunice Engefu Manyasi and Evgenii Zheltonozhskii and Fanyue Xia and Fatemeh Siar and Fernando Martínez-Plumed and Francesca Happé and Francois Chollet and Frieda Rong and Gaurav Mishra and Genta Indra Winata and Gerard de Melo and Germán Kruszewski and Giambattista Parascandolo and Giorgio Mariani and Gloria Wang and Gonzalo Jaimovitch-López and Gregor Betz and Guy Gur-Ari and Hana Galijasevic and Hannah Kim and Hannah Rashkin and Hannaneh Hajishirzi and Harsh Mehta and Hayden Bogar and Henry Shevlin and Hinrich Schütze and Hiromu Yakura and Hongming Zhang and Hugh Mee Wong and Ian Ng and Isaac Noble and Jaap Jumelet and Jack Geissinger and Jackson Kernion and Jacob Hilton and Jaehoon Lee and Jaime Fernández Fisac and James B. Simon and James Koppel and James Zheng and James Zou and Jan Kocoń and Jana Thompson and Jared Kaplan and Jarema Radom and Jascha Sohl-Dickstein and Jason Phang and Jason Wei and Jason Yosinski and Jekaterina Novikova and Jelle Bosscher and Jennifer Marsh and Jeremy Kim and Jeroen Taal and Jesse Engel and Jesujoba Alabi and Jiacheng Xu and Jiaming Song and Jillian Tang and Joan Waweru and John Burden and John Miller and John U. Balis and Jonathan Berant and Jörg Frohberg and Jos Rozen and Jose Hernandez-Orallo and Joseph Boudeman and Joseph Jones and Joshua B. Tenenbaum and Joshua S. Rule and Joyce Chua and Kamil Kanclerz and Karen Livescu and Karl Krauth and Karthik Gopalakrishnan and Katerina Ignatyeva and Katja Markert and Kaustubh D. Dhole and Kevin Gimpel and Kevin Omondi and Kory Mathewson and Kristen Chiafullo and Ksenia Shkaruta and Kumar Shridhar and Kyle McDonell and Kyle Richardson and Laria Reynolds and Leo Gao and Li Zhang and Liam Dugan and Lianhui Qin and Lidia Contreras-Ochando and Louis-Philippe Morency and Luca Moschella and Lucas Lam and Lucy Noble and Ludwig Schmidt and Luheng He and Luis Oliveros Colón and Luke Metz and Lütfi Kerem Şenel and Maarten Bosma and Maarten Sap and Maartje ter Hoeve and Maheen Farooqi and Manaal Faruqui and Mantas Mazeika and Marco Baturan and Marco Marelli and Marco Maru and Maria Jose Ramírez Quintana and Marie Tolkiehn and Mario Giulianelli and Martha Lewis and Martin Potthast and Matthew L. Leavitt and Matthias Hagen and Mátyás Schubert and Medina Orduna Baitemirova and Melody Arnaud and Melvin McElrath and Michael A. Yee and Michael Cohen and Michael Gu and Michael Ivanitskiy and Michael Starritt and Michael Strube and Michał Swędrowski and Michele Bevilacqua and Michihiro Yasunaga and Mihir Kale and Mike Cain and Mimee Xu and Mirac Suzgun and Mo Tiwari and Mohit Bansal and Moin Aminnaseri and Mor Geva and Mozhdeh Gheini and Mukund Varma T and Nanyun Peng and Nathan Chi and Nayeon Lee and Neta Gur-Ari Krakover and Nicholas Cameron and Nicholas Roberts and Nick Doiron and Nikita Nangia and Niklas Deckers and Niklas Muennighoff and Nitish Shirish Keskar and Niveditha S. 
Iyer and Noah Constant and Noah Fiedel and Nuan Wen and Oliver Zhang and Omar Agha and Omar Elbaghdadi and Omer Levy and Owain Evans and Pablo Antonio Moreno Casares and Parth Doshi and Pascale Fung and Paul Pu Liang and Paul Vicol and Pegah Alipoormolabashi and Peiyuan Liao and Percy Liang and Peter Chang and Peter Eckersley and Phu Mon Htut and Pinyu Hwang and Piotr Miłkowski and Piyush Patil and Pouya Pezeshkpour and Priti Oli and Qiaozhu Mei and Qing Lyu and Qinlang Chen and Rabin Banjade and Rachel Etta Rudolph and Raefer Gabriel and Rahel Habacker and Ramón Risco Delgado and Raphaël Millière and Rhythm Garg and Richard Barnes and Rif A. Saurous and Riku Arakawa and Robbe Raymaekers and Robert Frank and Rohan Sikand and Roman Novak and Roman Sitelew and Ronan LeBras and Rosanne Liu and Rowan Jacobs and Rui Zhang and Ruslan Salakhutdinov and Ryan Chi and Ryan Lee and Ryan Stovall and Ryan Teehan and Rylan Yang and Sahib Singh and Saif M. Mohammad and Sajant Anand and Sam Dillavou and Sam Shleifer and Sam Wiseman and Samuel Gruetter and Samuel R. Bowman and Samuel S. Schoenholz and Sanghyun Han and Sanjeev Kwatra and Sarah A. Rous and Sarik Ghazarian and Sayan Ghosh and Sean Casey and Sebastian Bischoff and Sebastian Gehrmann and Sebastian Schuster and Sepideh Sadeghi and Shadi Hamdan and Sharon Zhou and Shashank Srivastava and Sherry Shi and Shikhar Singh and Shima Asaadi and Shixiang Shane Gu and Shubh Pachchigar and Shubham Toshniwal and Shyam Upadhyay and Shyamolima and Debnath and Siamak Shakeri and Simon Thormeyer and Simone Melzi and Siva Reddy and Sneha Priscilla Makini and Soo-Hwan Lee and Spencer Torene and Sriharsha Hatwar and Stanislas Dehaene and Stefan Divic and Stefano Ermon and Stella Biderman and Stephanie Lin and Stephen Prasad and Steven T. Piantadosi and Stuart M. Shieber and Summer Misherghi and Svetlana Kiritchenko and Swaroop Mishra and Tal Linzen and Tal Schuster and Tao Li and Tao Yu and Tariq Ali and Tatsu Hashimoto and Te-Lin Wu and Théo Desbordes and Theodore Rothschild and Thomas Phan and Tianle Wang and Tiberius Nkinyili and Timo Schick and Timofei Kornev and Timothy Telleen-Lawton and Titus Tunduny and Tobias Gerstenberg and Trenton Chang and Trishala Neeraj and Tushar Khot and Tyler Shultz and Uri Shaham and Vedant Misra and Vera Demberg and Victoria Nyamai and Vikas Raunak and Vinay Ramasesh and Vinay Uday Prabhu and Vishakh Padmakumar and Vivek Srikumar and William Fedus and William Saunders and William Zhang and Wout Vossen and Xiang Ren and Xiaoyu Tong and Xinran Zhao and Xinyi Wu and Xudong Shen and Yadollah Yaghoobzadeh and Yair Lakretz and Yangqiu Song and Yasaman Bahri and Yejin Choi and Yichi Yang and Yiding Hao and Yifu Chen and Yonatan Belinkov and Yu Hou and Yufang Hou and Yuntao Bai and Zachary Seid and Zhuoye Zhao and Zijian Wang and Zijie J. Wang and Zirui Wang and Ziyi Wu}, year={2022}, eprint={2206.04615}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Groups and Tasks #### Groups * `group_name`: `Short description` #### Tasks * `task_name`: `1-sentence description of what this particular task does` * `task_name2`: ... ### Checklist For adding novel benchmarks/datasets to the library: * [ ] Is the task an existing benchmark in the literature? * [ ] Have you referenced the original paper that introduced the task? * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? 
If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
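Because the Groups and Tasks section above still contains template placeholders, the concrete BIG-bench task names registered in the harness are not listed in this README. One way to discover them is to query the task registry directly. This is a minimal sketch that assumes a recent lm-evaluation-harness CLI in which `--tasks list` prints every registered task name; the `grep` pattern is only an illustrative filter.

```bash
# Print the harness's task registry and keep only BIG-bench entries.
# Assumes `--tasks list` is supported by the installed lm-evaluation-harness.
lm_eval --tasks list | grep -i bigbench
```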
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/bigbench/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/bigbench/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 9741 }
# BLiMP ### Paper Title: `BLiMP: A Benchmark of Linguistic Minimal Pairs for English` Abstract: `https://arxiv.org/abs/1912.00582` BLiMP is a challenge set for evaluating what language models (LMs) know about major grammatical phenomena in English. BLiMP consists of 67 sub-datasets, each containing 1000 minimal pairs isolating specific contrasts in syntax, morphology, or semantics. The data is automatically generated according to expert-crafted grammars. Homepage: https://github.com/alexwarstadt/blimp ### Citation ``` @article{warstadt2019blimp, author = {Warstadt, Alex and Parrish, Alicia and Liu, Haokun and Mohananey, Anhad and Peng, Wei and Wang, Sheng-Fu and Bowman, Samuel R.}, title = {BLiMP: The Benchmark of Linguistic Minimal Pairs for English}, journal = {Transactions of the Association for Computational Linguistics}, volume = {8}, number = {}, pages = {377-392}, year = {2020}, doi = {10.1162/tacl\_a\_00321}, URL = {https://doi.org/10.1162/tacl_a_00321}, eprint = {https://doi.org/10.1162/tacl_a_00321}, abstract = { We introduce The Benchmark of Linguistic Minimal Pairs (BLiMP),1 a challenge set for evaluating the linguistic knowledge of language models (LMs) on major grammatical phenomena in English. BLiMP consists of 67 individual datasets, each containing 1,000 minimal pairs—that is, pairs of minimally different sentences that contrast in grammatical acceptability and isolate specific phenomenon in syntax, morphology, or semantics. We generate the data according to linguist-crafted grammar templates, and human aggregate agreement with the labels is 96.4\%. We evaluate n-gram, LSTM, and Transformer (GPT-2 and Transformer-XL) LMs by observing whether they assign a higher probability to the acceptable sentence in each minimal pair. We find that state-of-the-art models identify morphological contrasts related to agreement reliably, but they struggle with some subtle semantic and syntactic phenomena, such as negative polarity items and extraction islands. } } ``` ### Subtasks List or describe tasks defined in this folder, and their names here: * `task_name`: `1-sentence description of what this particular task does` * `task_name2`: ..... ### Checklist For adding novel benchmarks/datasets to the library: * [ ] Is the task an existing benchmark in the literature? * [ ] Have you referenced the original paper that introduced the task? * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
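For orientation, BLiMP sub-tasks are scored by comparing the loglikelihoods of the two sentences in each minimal pair, so no few-shot examples are needed. The command below is a hedged sketch: it assumes the sub-tasks are grouped under a `blimp` task name (this README leaves the task list as a template placeholder), and `gpt2` is used purely as an illustrative model, with flags matching the usage examples elsewhere in this collection.

```bash
# Zero-shot BLiMP run; each item is decided by which sentence in the minimal
# pair receives the higher loglikelihood, so no in-context examples are used.
lm_eval --model hf \
  --model_args pretrained=gpt2 \
  --tasks blimp \
  --device cuda:0 \
  --batch_size auto
```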
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/blimp/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/blimp/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2917 }
# CatalanBench ### Paper CatalanBench is a benchmark for evaluating language models in Catalan tasks. That is, it evaluates the ability of a language model to understand and generate Catalan text. CatalanBench offers a combination of pre-existing, open datasets and datasets developed exclusively for this benchmark. All the details of CatalanBench will be published in a paper soon. The new evaluation datasets included in CatalanBench are: | Task | Category | Homepage | |:-------------:|:-----:|:-----:| | ARC_ca | Question Answering | https://huggingface.co/datasets/projecte-aina/arc_ca | | MGSM_ca | Math | https://huggingface.co/datasets/projecte-aina/mgsm_ca | | OpenBookQA_ca | Question Answering | https://huggingface.co/datasets/projecte-aina/openbookqa_ca | | Parafraseja | Paraphrasing | https://huggingface.co/datasets/projecte-aina/Parafraseja | | PIQA_ca | Question Answering | https://huggingface.co/datasets/projecte-aina/piqa_ca | | SIQA_ca | Question Answering | https://huggingface.co/datasets/projecte-aina/siqa_ca | | XStoryCloze_ca | Commonsense Reasoning | https://huggingface.co/datasets/projecte-aina/xstorycloze_ca | The datasets included in CatalanBench that have been made public in previous publications are: | Task | Category | Paper title | Homepage | |:-------------:|:-----:|:-------------:|:-----:| | Belebele_ca | Reading Comprehension | [The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants](https://arxiv.org/abs/2308.16884) | https://huggingface.co/datasets/facebook/belebele | | caBREU | Summarization | [Building a Data Infrastructure for a Mid-Resource Language: The Case of Catalan](https://aclanthology.org/2024.lrec-main.231/) | https://huggingface.co/datasets/projecte-aina/caBreu | | CatalanQA | Question Answering | [Building a Data Infrastructure for a Mid-Resource Language: The Case of Catalan](https://aclanthology.org/2024.lrec-main.231/) | https://huggingface.co/datasets/projecte-aina/catalanqa | | CatCoLA | Linguistic Acceptability | CatCoLA: Catalan Corpus of Linguistic Acceptability | https://huggingface.co/datasets/nbel/CatCoLA | | COPA-ca | Commonsense Reasoning | [Building a Data Infrastructure for a Mid-Resource Language: The Case of Catalan](https://aclanthology.org/2024.lrec-main.231/) | https://huggingface.co/datasets/projecte-aina/COPA-ca | | CoQCat | Question Answering | [Building a Data Infrastructure for a Mid-Resource Language: The Case of Catalan](https://aclanthology.org/2024.lrec-main.231/) | https://huggingface.co/datasets/projecte-aina/CoQCat | | FLORES_ca | Translation | [The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation](https://arxiv.org/abs/2106.03193) | https://huggingface.co/datasets/facebook/flores | | PAWS-ca | Paraphrasing | [Building a Data Infrastructure for a Mid-Resource Language: The Case of Catalan](https://aclanthology.org/2024.lrec-main.231/) | https://huggingface.co/datasets/projecte-aina/PAWS-ca | | TE-ca | Natural Language Inference | [Building a Data Infrastructure for a Mid-Resource Language: The Case of Catalan](https://aclanthology.org/2024.lrec-main.231/) | https://huggingface.co/datasets/projecte-aina/teca | | VeritasQA_ca | Truthfulness | VeritasQA: A Truthfulness Benchmark Aimed at Multilingual Transferability | TBA | | WNLI-ca | Natural Language Inference | [Building a Data Infrastructure for a Mid-Resource Language: The Case of Catalan](https://aclanthology.org/2024.lrec-main.231/) | https://huggingface.co/datasets/projecte-aina/wnli-ca | |
XNLI-ca | Natural Language Inference | [Building a Data Infrastructure for a Mid-Resource Language: The Case of Catalan](https://aclanthology.org/2024.lrec-main.231/) | https://huggingface.co/datasets/projecte-aina/xnli-ca | | XQuAD-ca | Question Answering | [Building a Data Infrastructure for a Mid-Resource Language: The Case of Catalan](https://aclanthology.org/2024.lrec-main.231/) | https://huggingface.co/datasets/projecte-aina/xquad-ca | ### Citation Paper for CatalanBench coming soon. <!--```bibtex @inproceedings{baucells-2024-iberobench, title = "IberoBench: A Benchmark for LLM Evaluation in Iberian Languages", author = "Baucells, Irene and AUTHORS, ADD", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", year = "2024", publisher = "Association for Computational Linguistics", } ``` --> ### Groups and Tasks #### Groups - `catalan_bench`: All tasks included in CatalanBench. - `flores_ca`: All FLORES translation tasks from or to Catalan. #### Tags - `cabreu`: Three CaBREU tasks for each type of summary (extractive, abstractive and extreme). - `phrases_va`: Two Phrases_va tasks for language adaptation between Catalan and Valencian. #### Tasks The following tasks evaluate tasks on CatalanBench dataset using various scoring methods. - `arc_ca_challenge` - `arc_ca_easy` - `belebele_cat_Latn` - `cabreu` - `catalanqa` - `catcola` - `copa_ca` - `coqcat` - `flores_ca` - `flores_ca-de` - `flores_ca-en` - `flores_ca-es` - `flores_ca-eu` - `flores_ca-fr` - `flores_ca-gl` - `flores_ca-it` - `flores_ca-pt` - `flores_de-ca` - `flores_en-ca` - `flores_es-ca` - `flores_eu-ca` - `flores_fr-ca` - `flores_gl-ca` - `flores_it-ca` - `flores_pt-ca` - `mgsm_direct_ca` - `openbookqa_ca` - `parafraseja` - `paws_ca` - `phrases_ca` - `piqa_ca` - `siqa_ca` - `teca` - `veritasqa_gen_ca` - `veritasqa_mc1_ca` - `veritasqa_mc2_ca` - `wnli_ca` - `xnli_ca` - `xquad_ca` - `xstorycloze_ca` Some of these tasks are taken from benchmarks already available in LM Evaluation Harness. These are: - `belebele_cat_Latn`: Belebele Catalan ### Checklist * [x] Is the task an existing benchmark in the literature? * [ ] Have you referenced the original paper that introduced the task? * [ ] If yes, does the original paper provide a reference implementation? * [ ] Yes, original implementation contributed by author of the benchmark If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
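To show how the group names above are used in practice, the sketch below runs the full `catalan_bench` group; substituting `flores_ca` restricts the run to the Catalan FLORES translation directions. The model choice is illustrative, and the flags mirror the usage examples given for FrenchBench later in this collection.

```bash
# Evaluate the full CatalanBench group; swap in `flores_ca` to run only the
# FLORES translation tasks from or to Catalan.
lm_eval --model hf \
  --model_args pretrained=meta-llama/Llama-2-7b-hf \
  --tasks catalan_bench \
  --device cuda:0 \
  --batch_size auto
```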
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/catalan_bench/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/catalan_bench/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 6403 }
# C-Eval (Validation) ### Paper C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models https://arxiv.org/pdf/2305.08322.pdf C-Eval is a comprehensive Chinese evaluation suite for foundation models. It consists of 13948 multi-choice questions spanning 52 diverse disciplines and four difficulty levels. Homepage: https://cevalbenchmark.com/ ### Citation ```bibtex @article{huang2023ceval, title={C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models}, author={Huang, Yuzhen and Bai, Yuzhuo and Zhu, Zhihao and Zhang, Junlei and Zhang, Jinghan and Su, Tangjun and Liu, Junteng and Lv, Chuancheng and Zhang, Yikai and Lei, Jiayi and Fu, Yao and Sun, Maosong and He, Junxian}, journal={arXiv preprint arXiv:2305.08322}, year={2023} } ``` SUBJECTS = { "computer_network":"计算机网络", "operating_system":"操作系统", "computer_architecture":"计算机组成", "college_programming":"大学编程", "college_physics":"大学物理", "college_chemistry":"大学化学", "advanced_mathematics":"高等数学", "probability_and_statistics":"概率统计", "discrete_mathematics":"离散数学", "electrical_engineer":"注册电气工程师", "metrology_engineer":"注册计量师", "high_school_mathematics":"高中数学", "high_school_physics":"高中物理", "high_school_chemistry":"高中化学", "high_school_biology":"高中生物", "middle_school_mathematics":"初中数学", "middle_school_biology":"初中生物", "middle_school_physics":"初中物理", "middle_school_chemistry":"初中化学", "veterinary_medicine":"兽医学", "college_economics":"大学经济学", "business_administration":"工商管理", "marxism":"马克思主义基本原理", "mao_zedong_thought":"毛泽东思想和中国特色社会主义理论体系概论", "education_science":"教育学", "teacher_qualification":"教师资格", "high_school_politics":"高中政治", "high_school_geography":"高中地理", "middle_school_politics":"初中政治", "middle_school_geography":"初中地理", "modern_chinese_history":"近代史纲要", "ideological_and_moral_cultivation":"思想道德修养与法律基础", "logic":"逻辑学", "law":"法学", "chinese_language_and_literature":"中国语言文学", "art_studies":"艺术学", "professional_tour_guide":"导游资格", "legal_professional":"法律职业资格", "high_school_chinese":"高中语文", "high_school_history":"高中历史", "middle_school_history":"初中历史", "civil_servant":"公务员", "sports_science":"体育学", "plant_protection":"植物保护", "basic_medicine":"基础医学", "clinical_medicine":"临床医学", "urban_and_rural_planner":"注册城乡规划师", "accountant":"注册会计师", "fire_engineer":"注册消防工程师", "environmental_impact_assessment_engineer":"环境影响评价工程师", "tax_accountant":"税务师", "physician":"医师资格" } ### Groups and Tasks #### Groups - `ceval-valid`: All 52 subjects of the C-Eval dataset, evaluated following the methodology in MMLU's original implementation. This implementation consists solely of the validation set of C-Eval, as the test set requires submission of model predictions to an external site.
#### Tasks The following tasks evaluate subjects in the C-Eval dataset using loglikelihood-based multiple-choice scoring: - `ceval-valid_{subject_english}` ### Checklist * [x] Is the task an existing benchmark in the literature? * [x] Have you referenced the original paper that introduced the task? * [ ] If yes, does the original paper provide a reference implementation? If other tasks on this dataset are already supported: * [x] Is the "Main" variant of this task clearly denoted? * [x] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
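As a reference invocation, the whole validation split can be targeted through the `ceval-valid` group, with per-subject results reported for each `ceval-valid_{subject_english}` task. The sketch below uses a 5-shot setting in the spirit of the MMLU methodology mentioned above; both the model and the shot count are illustrative rather than prescribed by this README.

```bash
# 5-shot evaluation over all 52 C-Eval validation subjects.
lm_eval --model hf \
  --model_args pretrained=meta-llama/Llama-2-7b-hf \
  --tasks ceval-valid \
  --num_fewshot 5 \
  --device cuda:0 \
  --batch_size auto
```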
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/ceval/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/ceval/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 4464 }
# CMMLU ### Paper CMMLU: Measuring massive multitask language understanding in Chinese https://arxiv.org/abs/2306.09212 CMMLU is a comprehensive evaluation benchmark specifically designed to evaluate the knowledge and reasoning abilities of LLMs within the context of Chinese language and culture. CMMLU covers a wide range of subjects, comprising 67 topics that span from elementary to advanced professional levels. Homepage: https://github.com/haonan-li/CMMLU ### Citation ```bibtex @misc{li2023cmmlu, title={CMMLU: Measuring massive multitask language understanding in Chinese}, author={Haonan Li and Yixuan Zhang and Fajri Koto and Yifei Yang and Hai Zhao and Yeyun Gong and Nan Duan and Timothy Baldwin}, year={2023}, eprint={2306.09212}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Groups and Tasks #### Groups - `cmmlu`: All 67 subjects of the CMMLU dataset, evaluated following the methodology in MMLU's original implementation. #### Tasks The following tasks evaluate subjects in the CMMLU dataset using loglikelihood-based multiple-choice scoring: - `cmmlu_{subject_english}` ### Checklist * [x] Is the task an existing benchmark in the literature? * [x] Have you referenced the original paper that introduced the task? * [x] If yes, does the original paper provide a reference implementation? * [x] Yes, original implementation contributed by author of the benchmark If other tasks on this dataset are already supported: * [x] Is the "Main" variant of this task clearly denoted? * [x] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [x] Have you noted which, if any, published evaluation setups are matched by this variant?
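CMMLU can be run the same way as C-Eval above; the sketch below additionally writes the aggregated results to disk. The model, shot count, and output path are illustrative. A single subject can be selected by passing a concrete `cmmlu_{subject_english}` task name instead of the group.

```bash
# Evaluate all 67 CMMLU subjects and save the results as JSON.
lm_eval --model hf \
  --model_args pretrained=meta-llama/Llama-2-7b-hf \
  --tasks cmmlu \
  --num_fewshot 5 \
  --device cuda:0 \
  --batch_size auto \
  --output_path results/cmmlu_5shot.json
```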
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/cmmlu/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/cmmlu/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1747 }
# CommonsenseQA ### Paper Title: `COMMONSENSEQA: A Question Answering Challenge Targeting Commonsense Knowledge` Abstract: https://arxiv.org/pdf/1811.00937.pdf CommonsenseQA is a multiple-choice question answering dataset that requires different types of commonsense knowledge to predict the correct answers. It contains 12,102 questions with one correct answer and four distractor answers. Homepage: https://www.tau-nlp.org/commonsenseqa ### Citation ``` @inproceedings{talmor-etal-2019-commonsenseqa, title = "{C}ommonsense{QA}: A Question Answering Challenge Targeting Commonsense Knowledge", author = "Talmor, Alon and Herzig, Jonathan and Lourie, Nicholas and Berant, Jonathan", booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)", month = jun, year = "2019", address = "Minneapolis, Minnesota", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/N19-1421", doi = "10.18653/v1/N19-1421", pages = "4149--4158", archivePrefix = "arXiv", eprint = "1811.00937", primaryClass = "cs", } ``` ### Groups and Tasks #### Groups * Not part of a group yet. #### Tasks * `commonsense_qa`: Represents the "random" split from the paper. Uses an MMLU-style prompt, as (presumably) used by Llama evaluations. ### Checklist For adding novel benchmarks/datasets to the library: * [x] Is the task an existing benchmark in the literature? * [x] Have you referenced the original paper that introduced the task? * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
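Since `commonsense_qa` is a standalone task (not part of a group), it is invoked directly by name. The sketch below is illustrative: the model is a placeholder, and `--limit 100` is included only to show how to run a quick smoke test, following the usage conventions shown elsewhere in this collection.

```bash
# Quick smoke test on the first 100 CommonsenseQA examples.
lm_eval --model hf \
  --model_args pretrained=gpt2 \
  --tasks commonsense_qa \
  --device cuda:0 \
  --batch_size auto \
  --limit 100
```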
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/commonsense_qa/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/commonsense_qa/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2145 }
# COPAL ### Paper Title: `COPAL-ID: Indonesian Language Reasoning with Local Culture and Nuances` Abstract: `https://arxiv.org/abs/2311.01012` `COPAL-ID is an Indonesian causal commonsense reasoning dataset that captures local nuances. It provides a more natural portrayal of day-to-day causal reasoning within the Indonesian (especially Jakartan) cultural sphere. Professionally written and validated from scratch by natives, COPAL-ID is more fluent and free from awkward phrases, unlike the translated XCOPA-ID.` Homepage: `https://github.com/haryoa/copal-id` ### Citation ``` @article{wibowo2023copal, title={COPAL-ID: Indonesian Language Reasoning with Local Culture and Nuances}, author={Wibowo, Haryo Akbarianto and Fuadi, Erland Hilman and Nityasya, Made Nindyatama and Prasojo, Radityo Eko and Aji, Alham Fikri}, journal={arXiv preprint arXiv:2311.01012}, year={2023} } ``` ### Groups and Tasks #### Groups * `copal_id` #### Tasks * `copal_id_standard`: `Standard version of the COPAL dataset; uses formal language and fewer local nuances` * `copal_id_colloquial`: `Colloquial version of the COPAL dataset; uses informal language and more local nuances` ### Checklist For adding novel benchmarks/datasets to the library: * [x] Is the task an existing benchmark in the literature? * [x] Have you referenced the original paper that introduced the task? * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
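Both COPAL-ID variants can be requested in one invocation by passing a comma-separated task list (or the `copal_id` group). The model below is illustrative.

```bash
# Evaluate the standard and colloquial COPAL-ID splits in a single run.
lm_eval --model hf \
  --model_args pretrained=meta-llama/Llama-2-7b-hf \
  --tasks copal_id_standard,copal_id_colloquial \
  --device cuda:0 \
  --batch_size auto
```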
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/copal_id/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/copal_id/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1851 }
# CoQA ### Paper Title: `CoQA: A Conversational Question Answering Challenge` Abstract: https://arxiv.org/pdf/1808.07042.pdf CoQA is a large-scale dataset for building Conversational Question Answering systems. The goal of the CoQA challenge is to measure the ability of machines to understand a text passage and answer a series of interconnected questions that appear in a conversation. Homepage: https://stanfordnlp.github.io/coqa/ ### Citation ``` BibTeX-formatted citation goes here ``` ### Groups and Tasks #### Groups * Not part of a group yet #### Tasks * `coqa` ### Checklist For adding novel benchmarks/datasets to the library: * [ ] Is the task an existing benchmark in the literature? * [ ] Have you referenced the original paper that introduced the task? * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
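A minimal invocation for the `coqa` task listed above; the model and zero-shot setting are illustrative.

```bash
# CoQA answers are generated free-form and compared against the references.
lm_eval --model hf \
  --model_args pretrained=gpt2 \
  --tasks coqa \
  --num_fewshot 0 \
  --device cuda:0 \
  --batch_size auto
```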
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/coqa/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/coqa/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1261 }
# CrowS-Pairs ### Paper CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models https://aclanthology.org/2020.emnlp-main.154/ French CrowS-Pairs: Extending a challenge dataset for measuring social bias in masked language models to a language other than English https://aclanthology.org/2022.acl-long.583/ CrowS-Pairs is a challenge set for evaluating language models (LMs) on their tendency to generate biased outputs. CrowS-Pairs comes in 2 languages and the English subset has a newer version which fixes some of the issues with the original version. Homepage: https://github.com/nyu-mll/crows-pairs, https://gitlab.inria.fr/french-crows-pairs ### Citation ```bibtex @inproceedings{nangia-etal-2020-crows, title = "{C}row{S}-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models", author = "Nangia, Nikita and Vania, Clara and Bhalerao, Rasika and Bowman, Samuel R.", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.emnlp-main.154", doi = "10.18653/v1/2020.emnlp-main.154", pages = "1953--1967", abstract = "Pretrained language models, especially masked language models (MLMs) have seen success across many NLP tasks. However, there is ample evidence that they use the cultural biases that are undoubtedly present in the corpora they are trained on, implicitly creating harm with biased representations. To measure some forms of social bias in language models against protected demographic groups in the US, we introduce the Crowdsourced Stereotype Pairs benchmark (CrowS-Pairs). CrowS-Pairs has 1508 examples that cover stereotypes dealing with nine types of bias, like race, religion, and age. In CrowS-Pairs a model is presented with two sentences: one that is more stereotyping and another that is less stereotyping. The data focuses on stereotypes about historically disadvantaged groups and contrasts them with advantaged groups. We find that all three of the widely-used MLMs we evaluate substantially favor sentences that express stereotypes in every category in CrowS-Pairs. As work on building less biased models advances, this dataset can be used as a benchmark to evaluate progress.", } @inproceedings{neveol-etal-2022-french, title = "{F}rench {C}row{S}-Pairs: Extending a challenge dataset for measuring social bias in masked language models to a language other than {E}nglish", author = {N{\'e}v{\'e}ol, Aur{\'e}lie and Dupont, Yoann and Bezan{\c{c}}on, Julien and Fort, Kar{\"e}n}, booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.acl-long.583", doi = "10.18653/v1/2022.acl-long.583", pages = "8521--8531", abstract = "Warning: This paper contains explicit statements of offensive stereotypes which may be upsetting.Much work on biases in natural language processing has addressed biases linked to the social and cultural experience of English speaking individuals in the United States. We seek to widen the scope of bias studies by creating material to measure social bias in language models (LMs) against specific demographic groups in France.
We build on the US-centered CrowS-pairs dataset to create a multilingual stereotypes dataset that allows for comparability across languages while also characterizing biases that are specific to each country and language. We introduce 1,679 sentence pairs in French that cover stereotypes in ten types of bias like gender and age. 1,467 sentence pairs are translated from CrowS-pairs and 212 are newly crowdsourced. The sentence pairs contrast stereotypes concerning underadvantaged groups with the same sentence concerning advantaged groups. We find that four widely used language models (three French, one multilingual) favor sentences that express stereotypes in most bias categories. We report on the translation process from English into French, which led to a characterization of stereotypes in CrowS-pairs including the identification of US-centric cultural traits. We offer guidelines to further extend the dataset to other languages and cultural environments.", } ``` ### Groups and Tasks #### Groups - `crows_pairs_english`: The entire English subset of the CrowS-Pairs dataset. - `crows_pairs_french`: The entire French subset of the CrowS-Pairs dataset. #### Tasks The following tasks evaluate sub-areas of bias in the English CrowS-Pairs dataset: - `crows_pairs_english_age` - `crows_pairs_english_autre` - `crows_pairs_english_disability` - `crows_pairs_english_gender` - `crows_pairs_english_nationality` - `crows_pairs_english_physical_appearance` - `crows_pairs_english_race_color` - `crows_pairs_english_religion` - `crows_pairs_english_sexual_orientation` - `crows_pairs_english_socioeconomic` The following tasks evaluate sub-areas of bias in the French CrowS-Pairs dataset: - `crows_pairs_french_age` - `crows_pairs_french_autre` - `crows_pairs_french_disability` - `crows_pairs_french_gender` - `crows_pairs_french_nationality` - `crows_pairs_french_physical_appearance` - `crows_pairs_french_race_color` - `crows_pairs_french_religion` - `crows_pairs_french_sexual_orientation` - `crows_pairs_french_socioeconomic` All tasks evaluate the percentage of more-stereotypical sentences that are rated as more likely by a model than the non-stereotypical sentences (`pct_stereotype`), as well as the average absolute difference of loglikelihoods between the sentences in the pairs. ### Checklist * [x] Is the task an existing benchmark in the literature? * [x] Have you referenced the original paper that introduced the task? * [x] If yes, does the original paper provide a reference implementation? * [x] The original paper does not for causal language models, so this is a novel formulation of the task for autoregressive LMs. If other tasks on this dataset are already supported: * [x] Is the "Main" variant of this task clearly denoted? * [x] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [x] Have you noted which, if any, published evaluation setups are matched by this variant?
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/crows_pairs/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/crows_pairs/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 6561 }
# DROP ### Paper Title: `DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs` Abstract: https://aclanthology.org/attachments/N19-1246.Supplementary.pdf DROP is a QA dataset which tests comprehensive understanding of paragraphs. In this crowdsourced, adversarially-created, 96k question-answering benchmark, a system must resolve multiple references in a question, map them onto a paragraph, and perform discrete operations over them (such as addition, counting, or sorting). Homepage: https://allenai.org/data/drop Acknowledgement: This implementation is based on the official evaluation for `DROP`: https://github.com/allenai/allennlp-reading-comprehension/blob/master/allennlp_rc/eval/drop_eval.py ### Citation ``` @misc{dua2019drop, title={DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs}, author={Dheeru Dua and Yizhong Wang and Pradeep Dasigi and Gabriel Stanovsky and Sameer Singh and Matt Gardner}, year={2019}, eprint={1903.00161}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Groups and Tasks #### Groups * Not part of a group yet. #### Tasks * `drop` ### Checklist For adding novel benchmarks/datasets to the library: * [ ] Is the task an existing benchmark in the literature? * [ ] Have you referenced the original paper that introduced the task? * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
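A hedged example run of the `drop` task; the few-shot count and model are illustrative, and scoring follows the official DROP evaluation logic referenced above.

```bash
# Few-shot DROP evaluation; generated answers are scored with the DROP
# evaluation logic this implementation is based on.
lm_eval --model hf \
  --model_args pretrained=meta-llama/Llama-2-7b-hf \
  --tasks drop \
  --num_fewshot 3 \
  --device cuda:0 \
  --batch_size auto
```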
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/drop/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/drop/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1856 }
# EQ-Bench Title: `EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models` Abstract: https://arxiv.org/abs/2312.06281 EQ-Bench is a benchmark for language models designed to assess emotional intelligence. Why emotional intelligence? One reason is that it represents a subset of abilities that are important for the user experience, and which isn't explicitly tested by other benchmarks. Another reason is that it's not trivial to improve scores by fine tuning for the benchmark, which makes it harder to "game" the leaderboard. EQ-Bench is a little different from traditional psychometric tests. It uses a specific question format, in which the subject has to read a dialogue then rate the intensity of possible emotional responses of one of the characters. Every question is interpretative and assesses the ability to predict the magnitude of the 4 presented emotions. The test is graded without the need for a judge (so there is no length bias). It's cheap to run (only 171 questions), and produces results that correlate strongly with human preference (Arena ELO) and multi-domain benchmarks like MMLU. Homepage: https://eqbench.com/ NOTE: There are some key differences between the lm-evaluation-harness version and the implementation described in the EQ-Bench paper (These have been OK'd by the author): - The lm-eval version uses the EQ-Bench v2 test set (171 questions) and score calculation. It does not incorporate the revision part of the prompt, as per v2.1 (https://github.com/EQ-bench/EQ-Bench) - No retries in lm-eval version (EQ-Bench pipeline retries with successively higher temps if it encounters unparsable answers) - In the original implementation, unparsable answers are excluded from the final score, and 83% of answers have to be parseable or a fail is returned. The lm-eval version instead assigns 0 to unparsable answers and has no fail criteria. So for lower performing models, there may be differences with the EQ-Bench leaderboard. ### Citation ```bibtex @misc{paech2023eqbench, title={EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models}, author={Samuel J. Paech}, year={2023}, eprint={2312.06281}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Groups and Tasks #### Groups * Not part of a group yet #### Tasks * `eq_bench` ### Checklist * [x] Is the task an existing benchmark in the literature? * [x] Have you referenced the original paper that introduced the task? * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
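Because EQ-Bench v2 is graded directly from the parsed emotion-intensity ratings (no judge model), a single pass over its 171 questions is enough. The sketch below is illustrative in its choice of model.

```bash
# Single pass over the 171 EQ-Bench v2 questions; unparsable answers score 0
# in this lm-eval implementation, as noted above.
lm_eval --model hf \
  --model_args pretrained=meta-llama/Llama-2-7b-hf \
  --tasks eq_bench \
  --device cuda:0 \
  --batch_size auto
```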
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/eq_bench/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/eq_bench/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2946 }
# EusExams ### Paper Title: Latxa: An Open Language Model and Evaluation Suite for Basque Abstract: https://arxiv.org/abs/2403.20266 EusExams is a collection of tests designed to prepare individuals for Public Service examinations conducted by several Basque institutions, including the public health system Osakidetza, the Basque Government, the City Councils of Bilbao and Gasteiz, and the University of the Basque Country (UPV/EHU). Within each of these groups, there are different exams for public positions, such as administrative and assistant roles. Each multiple-choice question contains 2 to 4 choices (3.90 on average) and one correct answer. The dataset is mostly parallel with 16k questions in Basque and 18k in Spanish. Homepage: https://github.com/hitz-zentroa/latxa ### Citation ``` @misc{etxaniz2024latxa, title={Latxa: An Open Language Model and Evaluation Suite for Basque}, author={Julen Etxaniz and Oscar Sainz and Naiara Perez and Itziar Aldabe and German Rigau and Eneko Agirre and Aitor Ormazabal and Mikel Artetxe and Aitor Soroa}, year={2024}, eprint={2403.20266}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Groups and Tasks #### Tags * `eus_exams_eu`: The Basque version of the exams. * `eus_exams_es`: The Spanish version of the exams. #### Tasks Basque and Spanish versions of the exams are available as separate tasks starting with `eus_exams_eu` and `eus_exams_es` respectively. ### Checklist For adding novel benchmarks/datasets to the library: * [ ] Is the task an existing benchmark in the literature? * [ ] Have you referenced the original paper that introduced the task? * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/eus_exams/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/eus_exams/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2148 }
# EusProficiency ### Paper Title: Latxa: An Open Language Model and Evaluation Suite for Basque Abstract: https://arxiv.org/abs/2403.20266 EusProficiency comprises 5,169 exercises on different topics from past EGA exams, the official C1-level certificate of proficiency in Basque. We collected the atarikoa exercises from EGA exams through the years 1998 to 2008. Atarikoa is the first qualifying test of EGA, which measures different aspects of language competency, such as reading comprehension, grammar, vocabulary, spelling, and writing. Each test generally has 85 multiple-choice questions, with 4 choices and a single correct answer. Homepage: https://github.com/hitz-zentroa/latxa ### Citation ``` @misc{etxaniz2024latxa, title={Latxa: An Open Language Model and Evaluation Suite for Basque}, author={Julen Etxaniz and Oscar Sainz and Naiara Perez and Itziar Aldabe and German Rigau and Eneko Agirre and Aitor Ormazabal and Mikel Artetxe and Aitor Soroa}, year={2024}, eprint={2403.20266}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Groups and Tasks #### Groups There are no groups. #### Tasks * `eus_proficiency`: EusProficiency comprises 5,169 exercises on different topics from past EGA exams, the official C1-level certificate of proficiency in Basque. ### Checklist For adding novel benchmarks/datasets to the library: * [ ] Is the task an existing benchmark in the literature? * [ ] Have you referenced the original paper that introduced the task? * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/eus_proficiency/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/eus_proficiency/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2003 }
# EusReading ### Paper Title: Latxa: An Open Language Model and Evaluation Suite for Basque Abstract: https://arxiv.org/abs/2403.20266 EusReading consists of 352 reading comprehension exercises (irakurmena) sourced from the set of past EGA exams from 1998 to 2008. Each test generally has 10 multiple-choice questions, with 4 choices and a single correct answer. These exercises are more challenging than Belebele due to the complexity and length of the input texts. As a result, EusReading is useful to measure long context understanding of models. Homepage: https://github.com/hitz-zentroa/latxa ### Citation ``` @misc{etxaniz2024latxa, title={Latxa: An Open Language Model and Evaluation Suite for Basque}, author={Julen Etxaniz and Oscar Sainz and Naiara Perez and Itziar Aldabe and German Rigau and Eneko Agirre and Aitor Ormazabal and Mikel Artetxe and Aitor Soroa}, year={2024}, eprint={2403.20266}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Groups and Tasks #### Groups There are no groups. #### Tasks * `eus_reading`: EusReading consists of 352 reading comprehension exercises (irakurmena) sourced from the set of past EGA exams from 1998 to 2008. ### Checklist For adding novel benchmarks/datasets to the library: * [ ] Is the task an existing benchmark in the literature? * [ ] Have you referenced the original paper that introduced the task? * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/eus_reading/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/eus_reading/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1897 }
# EusTrivia ### Paper Title: Latxa: An Open Language Model and Evaluation Suite for Basque Abstract: https://arxiv.org/abs/2403.20266 EusTrivia consists of 1,715 trivia questions from multiple online sources. 56.3\% of the questions are elementary level (grades 3-6), while the rest are considered challenging. A significant portion of the questions focus specifically on the Basque Country, its language and culture. Each multiple-choice question contains two, three or four choices (3.84 on average) and a single correct answer. Five areas of knowledge are covered: - **Humanities and Natural Sciences** (27.8%): This category encompasses questions about history, geography, biology, ecology and other social and natural sciences. - **Leisure and Art** (24.5%): This category includes questions on sports and athletes, performative and plastic arts and artists, architecture, cultural events, and related topics. - **Music** (16.0%): Here are grouped all the questions about music and musicians, both classical and contemporary. - **Language and Literature** (17.1%): This category is concerned with all kinds of literature productions and writers, as well as metalinguistic questions (e.g., definitions, synonyms, and word usage). - **Mathematics and ICT** (14.5%): This category covers mathematical problems and questions about ICT, as well as questions about people known for their contributions to these fields of knowledge. Homepage: https://github.com/hitz-zentroa/latxa ### Citation ``` @misc{etxaniz2024latxa, title={Latxa: An Open Language Model and Evaluation Suite for Basque}, author={Julen Etxaniz and Oscar Sainz and Naiara Perez and Itziar Aldabe and German Rigau and Eneko Agirre and Aitor Ormazabal and Mikel Artetxe and Aitor Soroa}, year={2024}, eprint={2403.20266}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Groups and Tasks #### Groups There are no groups. #### Tasks * `eus_trivia`: EusTrivia consists of 1,715 trivia questions from multiple online sources. ### Checklist For adding novel benchmarks/datasets to the library: * [ ] Is the task an existing benchmark in the literature? * [ ] Have you referenced the original paper that introduced the task? * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
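EusTrivia can be evaluated on its own or alongside the other Latxa-suite tasks documented in this collection (`eus_exams_eu`, `eus_proficiency`, `eus_reading`). The sketch below runs all four Basque evaluations together; the model is illustrative and the flags follow the usage examples elsewhere in this collection.

```bash
# Run the Basque-language Latxa evaluation suite in one invocation.
lm_eval --model hf \
  --model_args pretrained=meta-llama/Llama-2-7b-hf \
  --tasks eus_exams_eu,eus_proficiency,eus_reading,eus_trivia \
  --device cuda:0 \
  --batch_size auto
```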
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/eus_trivia/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/eus_trivia/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2723 }
# FDA ### Paper Title: Language Models Enable Simple Systems For Generating Structured Views Of Heterogenous Data Lakes Abstract: A long standing goal of the data management community is to develop general, automated systems that ingest semi-structured documents and output queryable tables without human effort or domain specific customization. Given the sheer variety of potential documents, state-of-the art systems make simplifying assumptions and use domain specific training. In this work, we ask whether we can maintain generality by using large language models (LLMs). LLMs, which are pretrained on broad data, can perform diverse downstream tasks simply conditioned on natural language task descriptions. We propose and evaluate EVAPORATE, a simple, prototype system powered by LLMs. We identify two fundamentally different strategies for implementing this system: prompt the LLM to directly extract values from documents or prompt the LLM to synthesize code that performs the extraction. Our evaluations show a cost-quality tradeoff between these two approaches. Code synthesis is cheap, but far less accurate than directly processing each document with the LLM. To improve quality while maintaining low cost, we propose an extended code synthesis implementation, EVAPORATE-CODE+, which achieves better quality than direct extraction. Our key insight is to generate many candidate functions and ensemble their extractions using weak supervision. EVAPORATE-CODE+ not only outperforms the state-of-the art systems, but does so using a sublinear pass over the documents with the LLM. This equates to a 110× reduction in the number of tokens the LLM needs to process, averaged across 16 real-world evaluation settings of 10k documents each. A task for LMs to perform Information Extraction, as implemented by Based. Homepage: https://github.com/HazyResearch/based-evaluation-harness Description: > FDA (Information Extraction). The task is to extract key-value pairs from a set of PDFs scraped from the FDA website. We use the dataset and labels collected in Arora et al. 2023. We break apart the documents into chunks of 1,920 tokens. For every key-value pair that appears in the chunk, we create a zero-shot prompt using the simple prompt template: {chunk} \n {key}: We allow the model to generate a fixed number of tokens after the prompt and check (with case insensitivity) if the value is contained within the generation. We report accuracy, the fraction of prompts for which the generation contains the value. ### Citation ``` @misc{arora2024simple, title={Simple linear attention language models balance the recall-throughput tradeoff}, author={Simran Arora and Sabri Eyuboglu and Michael Zhang and Aman Timalsina and Silas Alberti and Dylan Zinsley and James Zou and Atri Rudra and Christopher Ré}, year={2024}, eprint={2402.18668}, archivePrefix={arXiv}, primaryClass={cs.CL} } @misc{arora2023language, title={Language Models Enable Simple Systems for Generating Structured Views of Heterogeneous Data Lakes}, author={Simran Arora and Brandon Yang and Sabri Eyuboglu and Avanika Narayan and Andrew Hojel and Immanuel Trummer and Christopher Ré}, year={2023}, eprint={2304.09433}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Groups and Tasks #### Tasks * `fda`: the FDA task as implemented in the paper "Simple linear attention language models balance the recall-throughput tradeoff". Designed for zero-shot evaluation of small LMs. 
### Checklist For adding novel benchmarks/datasets to the library: * [x] Is the task an existing benchmark in the literature? * [x] Have you referenced the original paper that introduced the task? * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [x] Is the "Main" variant of this task clearly denoted? * [x] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [x] Have you noted which, if any, published evaluation setups are matched by this variant?
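Since the task description above frames FDA as a zero-shot extraction benchmark for small models, a minimal run needs no in-context examples; the model below is illustrative.

```bash
# Zero-shot FDA run; accuracy is the fraction of prompts whose generation
# contains the gold value (case-insensitive), per the task description above.
lm_eval --model hf \
  --model_args pretrained=gpt2 \
  --tasks fda \
  --num_fewshot 0 \
  --device cuda:0 \
  --batch_size auto
```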
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/fda/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/fda/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 4225 }
# FLD ### Paper Title: Learning Deductive Reasoning from Synthetic Corpus based on Formal Logic Abstract: https://arxiv.org/abs/2308.07336 **FLD** (**F**ormal **L**ogic **D**eduction) is a deductive reasoning benchmark. Given a set of facts and a hypothesis, an LLM is required to generate (i) proof steps to (dis-)prove the hypothesis, and (ii) an answer ("proved", "disproved" or "unknown"). Unique features of FLD are: * It assesses the model's logical reasoning ability *isolated from knowledge*, as the facts are randomly constructed so that referring to existing knowledge never helps solve the task. * It assesses diverse reasoning patterns (i.e., deduction rules), as it is based on formal logic theory. * As a result, it is highly challenging. Indeed, even GPT-4 can solve only about half of the problems. Homepage: https://github.com/hitachi-nlp/FLD ### Citation ``` @InProceedings{pmlr-v202-morishita23a, title = {Learning Deductive Reasoning from Synthetic Corpus based on Formal Logic}, author = {Morishita, Terufumi and Morio, Gaku and Yamaguchi, Atsuki and Sogawa, Yasuhiro}, booktitle = {Proceedings of the 40th International Conference on Machine Learning}, pages = {25254--25274}, year = {2023}, editor = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan}, volume = {202}, series = {Proceedings of Machine Learning Research}, month = {23--29 Jul}, publisher = {PMLR}, pdf = {https://proceedings.mlr.press/v202/morishita23a/morishita23a.pdf}, url = {https://proceedings.mlr.press/v202/morishita23a.html}, } ``` ### Groups and Tasks This release is the simplified version of FLD where a model is required to predict only an answer. This setting is described by "answer accuracy" in the original paper. #### Tasks in Group `fld` * `fld_default` is a basic task based on [FLD.v2](https://huggingface.co/datasets/hitachi-nlp/FLD.v2/viewer/star) * `fld_star` is a more challenging version based on [FLD.v2-star](https://huggingface.co/datasets/hitachi-nlp/FLD.v2/viewer/star) #### Tasks in Group `fld_logical_formula` Further, we have "logical formula" versions of the benchmarks, which evaluate LLMs' pure logical reasoning capabilities within the domain of logical formulas, rather than natural language: * `fld_logical_formula_default` * `fld_logical_formula_fld_star` ### Checklist For adding novel benchmarks/datasets to the library: * [x] Is the task an existing benchmark in the literature? * [x] Have you referenced the original paper that introduced the task? * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
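A sketch for running both FLD answer-accuracy variants together; the model choice is illustrative.

```bash
# Evaluate the basic and the harder (star) FLD answer-accuracy variants.
lm_eval --model hf \
  --model_args pretrained=meta-llama/Llama-2-7b-hf \
  --tasks fld_default,fld_star \
  --device cuda:0 \
  --batch_size auto
```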
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/fld/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/fld/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 3101 }
# FrenchBench ### Paper FrenchBench is a benchmark for evaluating French language models, introduced in the paper [CroissantLLM: A Truly Bilingual French-English Language Model](https://arxiv.org/abs/2402.00786). It is a collection of tasks that evaluate the ability of a language model to understand and generate French text. This benchmark is constructed both from openly available datasets, as well as newly released manually annotated data. ### Citation ```bibtex @misc{faysse2024croissantllm, title={CroissantLLM: A Truly Bilingual French-English Language Model}, author={Manuel Faysse and Patrick Fernandes and Nuno M. Guerreiro and António Loison and Duarte M. Alves and Caio Corro and Nicolas Boizard and João Alves and Ricardo Rei and Pedro H. Martins and Antoni Bigata Casademunt and François Yvon and André F. T. Martins and Gautier Viaud and Céline Hudelot and Pierre Colombo}, year={2024}, eprint={2402.00786}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Groups, Tags, and Tasks #### Tags - `french_bench`: All tasks (non-perplexity based) - `french_bench_gen`: All official generative tasks - `french_bench_mc`: All official multiple choice tasks - `french_bench_perplexity`: All perplexity-based tasks (0 shot is recommended) - `french_bench_extra`: All extra tasks #### Tasks The following tasks evaluate tasks on the French Bench dataset using various scoring methods. - french_bench_boolqa - french_bench_fquadv2 - french_bench_fquadv2_bool - french_bench_fquadv2_genq - french_bench_fquadv2_hasAns - french_bench_topic_based_nli - french_bench_multifquad - french_bench_grammar - french_bench_vocab - french_bench_reading_comp - french_bench_xnli (modified XNLI) - french_bench_orangesum_abstract - french_bench_orangesum_title - french_bench_trivia - french_bench_hellaswag - french_bench_arc_challenge The french bench also includes other tasks from various benchmarks: - `belebele_fra_Latn`: Belebele French - `wmt14-en-fr`: WMT14 English-French - `wmt14-fr-en`: WMT14 French-English # Not to use in few-shot - `crows_pairs_french`: Crows Pairs French - `french_bench_opus_perplexity`: Opus Perplexity ### Usage ```bash # openai lm_eval --model openai-completions --model_args engine=text-davinci-003 --tasks french_bench --limit 100 --num_fewshot 3 --batch_size auto --output_path data/french_bench/davinci-003/results_french_bench_3shot.json lm_eval --model openai-completions --model_args engine=text-davinci-003 --tasks french_bench_opus_perplexity,crows_pairs_french --limit 100 --batch_size auto --output_path data/french_bench/davinci-003/results_french_bench2_0shot.json lm_eval --model hf --model_args pretrained=gpt2 --tasks french_bench --device cuda:0 --limit 100 --num_fewshot 3 --batch_size 8 --output_path data/french_bench/gpt2/results_french_bench_3shot.json lm_eval --model hf --model_args pretrained=gpt2 --tasks french_bench_opus_perplexity,crows_pairs_french --device cuda:0 --limit 100 --batch_size auto --output_path data/french_bench/gpt2/results_french_bench2_0shot.json lm_eval --model hf --model_args pretrained=meta-llama/Llama-2-7b-hf --tasks french_bench --device cuda:0 --limit 100 --num_fewshot 3 --batch_size 4 --output_path data/french_bench/llama-2-7b-hf/results_french_bench_3shot.json lm_eval --model hf --model_args pretrained=meta-llama/Llama-2-7b-hf --tasks french_bench_opus_perplexity,crows_pairs_french --device cuda:0 --limit 100 --batch_size auto --output_path data/french_bench/llama-2-7b-hf/results_french_bench2_0shot.json ``` HF and Accelerate options can be added when 
loading a model: ```bash accelerate launch -m lm_eval --model hf --model_args pretrained=meta-llama/Llama-2-7b-hf,dtype="float16" --tasks french_bench ``` ### Checklist * [x] Is the task an existing benchmark in the literature? * [x] Have you referenced the original paper that introduced the task? * [x] If yes, does the original paper provide a reference implementation? * [x] Yes, original implementation contributed by author of the benchmark If other tasks on this dataset are already supported: * [x] Is the "Main" variant of this task clearly denoted? * [x] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [x] Have you noted which, if any, published evaluation setups are matched by this variant?
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/french_bench/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/french_bench/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 4402 }
# GalicianBench

### Paper

GalicianBench is a benchmark for evaluating language models on Galician tasks. That is, it evaluates the ability of a language model to understand and generate Galician text. GalicianBench offers a combination of pre-existing, open datasets and datasets developed exclusively for this benchmark. All the details of GalicianBench will be published in a paper soon.

The new evaluation datasets included in GalicianBench are:

| Task | Category | Homepage |
|:-------------:|:-----:|:-----:|
| Belebele_gl | Reading Comprehension | https://huggingface.co/datasets/proxectonos/belebele_gl |
| GalCoLA | Linguistic Acceptability | https://huggingface.co/datasets/proxectonos/galcola |
| MGSM_gl | Math | https://huggingface.co/datasets/proxectonos/mgsm_gl |
| Parafrases_gl | Paraphrasing | https://huggingface.co/datasets/proxectonos/parafrases_gl |
| PAWS-gl | Paraphrasing | https://huggingface.co/datasets/proxectonos/PAWS-gl |
| OpenBookQA_gl | Question Answering | https://huggingface.co/datasets/proxectonos/openbookqa_gl |
| Summarization_gl | Summarization | https://huggingface.co/datasets/proxectonos/summarization_gl |
| TruthfulQA_gl | Truthfulness | https://huggingface.co/datasets/proxectonos/truthfulqa_gl |
| xnli_gl | NLI | https://huggingface.co/datasets/proxectonos/xnli_gl |
| xstorycloze_gl | Commonsense Reasoning | https://huggingface.co/datasets/proxectonos/xstorycloze_gl |

The datasets included in GalicianBench that have been made public in previous publications are:

| Task | Category | Paper title | Homepage |
|:-------------:|:-----:|:-------------:|:-----:|
| FLORES_gl | Translation | [The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation](https://arxiv.org/abs/2106.03193) | https://huggingface.co/datasets/facebook/flores |

### Citation

Paper for GalicianBench coming soon.

### Groups and Tasks

#### Groups

- `galician_bench`: All tasks included in GalicianBench.
- `flores_gl`: All FLORES translation tasks from or to Galician.

#### Tasks

The following tasks evaluate subjects in the GalicianBench dataset using various scoring methods.
- `belebele_glg_Latn`
- `flores_gl`
- `flores_gl-ca`
- `flores_gl-de`
- `flores_gl-en`
- `flores_gl-es`
- `flores_gl-eu`
- `flores_gl-fr`
- `flores_gl-it`
- `flores_gl-pt`
- `flores_ca-gl`
- `flores_de-gl`
- `flores_en-gl`
- `flores_es-gl`
- `flores_eu-gl`
- `flores_fr-gl`
- `flores_it-gl`
- `flores_pt-gl`
- `galcola`
- `summarization_gl`
- `parafrases_gl`
- `paws_gl`
- `openbookqa_gl`
- `mgsm_direct_gl`
- `truthfulqa_gl`
- `xnli_gl`
- `xstorycloze_gl`

### Checklist

* [x] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation?
* [ ] Yes, original implementation contributed by author of the benchmark

If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/galician_bench/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/galician_bench/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 3293 }
# Glianorex

The goal of this benchmark is to isolate test-answering capabilities from content knowledge.

### Paper

Title: Multiple Choice Questions and Large Languages Models: A Case Study with Fictional Medical Data

Abstract: https://arxiv.org/abs/2406.02394

To test the relevance of MCQs to assess LLM performance without prior data exposure, we created a fictional medical benchmark and knowledge base on a non-existent gland, the Glianorex. Using GPT-4 we generated a comprehensive textbook on the Glianorex in both English and French, and created multiple-choice questions in both English and French.

### Tasks

All tasks are multiple-choice questions with 4 options, of which only one is correct.

- `glianorex`: Evaluates all tasks listed below.
- `glianorex_en`: Evaluates the accuracy on 264 questions in English.
- `glianorex_fr`: Evaluates the accuracy on 264 questions in French.

#### Change Log

* (all tasks) 2024-09-23 -- 1.0
  * Switched the `test_split` from `train` to `test`.
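A minimal invocation sketch through the harness CLI (the model id, device, and batch size below are placeholders rather than a reference configuration) evaluates both language splits in one run:

```bash
# Evaluate the English and French Glianorex question sets together.
lm_eval --model hf \
    --model_args pretrained=gpt2 \
    --tasks glianorex_en,glianorex_fr \
    --device cuda:0 \
    --batch_size 8
```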
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/glianorex/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/glianorex/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1005 }
# GLUE **NOTE**: GLUE benchmark tasks do not provide publicly accessible labels for their test sets, so we default to the validation sets for all sub-tasks. ### Paper Title: `GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding` Abstract: https://openreview.net/pdf?id=rJ4km2R5t7 The General Language Understanding Evaluation (GLUE) benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems. GLUE consists of: - A benchmark of nine sentence- or sentence-pair language understanding tasks built on established existing datasets and selected to cover a diverse range of dataset sizes, text genres, and degrees of difficulty, and - A diagnostic dataset designed to evaluate and analyze model performance with respect to a wide range of linguistic phenomena found in natural language. Homepage: https://gluebenchmark.com/ ### Citation ``` @inproceedings{wang-etal-2018-glue, title = "{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding", author = "Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel", booktitle = "Proceedings of the 2018 {EMNLP} Workshop {B}lackbox{NLP}: Analyzing and Interpreting Neural Networks for {NLP}", month = nov, year = "2018", address = "Brussels, Belgium", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/W18-5446", doi = "10.18653/v1/W18-5446", pages = "353--355", abstract = "Human ability to understand language is \textit{general, flexible, and robust}. In contrast, most NLU models above the word level are designed for a specific task and struggle with out-of-domain data. If we aspire to develop models with understanding beyond the detection of superficial correspondences between inputs and outputs, then it is critical to develop a unified model that can execute a range of linguistic tasks across different domains. To facilitate research in this direction, we present the General Language Understanding Evaluation (GLUE, gluebenchmark.com): a benchmark of nine diverse NLU tasks, an auxiliary dataset for probing models for understanding of specific linguistic phenomena, and an online platform for evaluating and comparing models. For some benchmark tasks, training data is plentiful, but for others it is limited or does not match the genre of the test set. GLUE thus favors models that can represent linguistic knowledge in a way that facilitates sample-efficient learning and effective knowledge-transfer across tasks. While none of the datasets in GLUE were created from scratch for the benchmark, four of them feature privately-held test data, which is used to ensure that the benchmark is used fairly. We evaluate baselines that use ELMo (Peters et al., 2018), a powerful transfer learning technique, as well as state-of-the-art sentence representation models. The best models still achieve fairly low absolute scores. Analysis with our diagnostic dataset yields similarly weak performance over all phenomena tested, with some exceptions.", } ``` ### Groups, Tags, and Tasks #### Groups None. #### Tags * `glue`: Run all Glue subtasks. #### Tasks * `cola` * `mnli` * `mrpc` * `qnli` * `qqp` * `rte` * `sst` * `wnli` ### Checklist For adding novel benchmarks/datasets to the library: * [ ] Is the task an existing benchmark in the literature? * [ ] Have you referenced the original paper that introduced the task? * [ ] If yes, does the original paper provide a reference implementation? 
If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
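As with other tags in the harness, `glue` can be passed directly to the CLI; a rough sketch (placeholder model and runtime settings, with scores reported on the validation splits per the note above):

```bash
# Run all GLUE sub-tasks; validation sets are used by default since test labels are not public.
lm_eval --model hf --model_args pretrained=gpt2 --tasks glue --device cuda:0 --batch_size auto
```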
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/glue/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/glue/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 4053 }
# GPQA ### Paper Title: GPQA: A Graduate-Level Google-Proof Q&A Benchmark Abstract: https://arxiv.org/abs/2311.12022 We present GPQA, a challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. We ensure that the questions are high-quality and extremely difficult: experts who have or are pursuing PhDs in the corresponding domains reach 65% accuracy (74% when discounting clear mistakes the experts identified in retrospect), while highly skilled non-expert validators only reach 34% accuracy, despite spending on average over 30 minutes with unrestricted access to the web (i.e., the questions are “Google-proof”). The questions are also difficult for state-of-the-art AI systems, with our strongest GPT-4–based baseline achieving 39% accuracy. If we are to use future AI systems to help us answer very hard questions—for example, when developing new scientific knowledge—we need to develop *scalable oversight* methods that enable humans to supervise their outputs, which may be difficult even if the supervisors are themselves skilled and knowledgeable. The difficulty of GPQA both for skilled non-experts and frontier AI systems should enable realistic scalable oversight experiments, which we hope can help devise ways for human experts to reliably get truthful information from AI systems that surpass human capabilities. Homepage: `https://github.com/idavidrein/gpqa/tree/main` ### Citation ``` @misc{rein2023gpqa, title={GPQA: A Graduate-Level Google-Proof Q&A Benchmark}, author={David Rein and Betty Li Hou and Asa Cooper Stickland and Jackson Petty and Richard Yuanzhe Pang and Julien Dirani and Julian Michael and Samuel R. Bowman}, year={2023}, eprint={2311.12022}, archivePrefix={arXiv}, primaryClass={cs.AI} } ``` This dataset is gated, so you will have to accept the terms of use at https://huggingface.co/datasets/Idavidrein/gpqa and login via `huggingface-cli login` using your HF Hub token before running this task. ### Groups, Tags, and Tasks #### Groups None #### Tags * `gpqa`: runs all GPQA variants. #### Tasks * `gpqa_{main, diamond, extended}_zeroshot` * `gpqa_{main, diamond, extended}_n_shot` * `gpqa_{main, diamond, extended}_generative_n_shot` * `gpqa_{main, diamond, extended}_cot_zeroshot` * `gpqa_{main, diamond, extended}_cot_n_shot` ### Checklist For adding novel benchmarks/datasets to the library: * [x] Is the task an existing benchmark in the literature? * [x] Have you referenced the original paper that introduced the task? * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
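Since the dataset is gated, authentication has to precede the first run; a hedged sketch (the model id is only an example) looks like:

```bash
# Accept the dataset terms on the Hugging Face Hub, then log in with your token.
huggingface-cli login
# Zero-shot multiple-choice evaluation on the main split.
lm_eval --model hf \
    --model_args pretrained=meta-llama/Llama-2-7b-hf \
    --tasks gpqa_main_zeroshot \
    --device cuda:0 \
    --batch_size auto
```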
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/gpqa/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/gpqa/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 3062 }
# GSM8k ## Paper Training Verifiers to Solve Math Word Problems https://arxiv.org/abs/2110.14168 State-of-the-art language models can match human performance on many tasks, but they still struggle to robustly perform multi-step mathematical reasoning. To diagnose the failures of current models and support research, we introduce GSM8K, a dataset of 8.5K high quality linguistically diverse grade school math word problems. We find that even the largest transformer models fail to achieve high test performance, despite the conceptual simplicity of this problem distribution. NOTE: See the official implementation of the task: https://github.com/openai/grade-school-math/blob/master/grade_school_math/calculator.py for how to make use of the dataset's calculator annotations in your language model's sample/generation function. Homepage: https://github.com/openai/grade-school-math ## Citation ``` @misc{cobbe2021training, title={Training Verifiers to Solve Math Word Problems}, author={Karl Cobbe and Vineet Kosaraju and Mohammad Bavarian and Jacob Hilton and Reiichiro Nakano and Christopher Hesse and John Schulman}, year={2021}, eprint={2110.14168}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` ### Groups and Tasks #### Groups - `math_word_problems` - `chain_of_thought` - `self_consistency` #### Tasks - `gsm8k_yaml` - `gsm8k_cot`: GSM8K with Chain-of-Thought - `gsm8k_cot_self_consistency`: GSM8K with Chain-of-Thought and Self-Consistency - `gsm8k_cot_llama`: GSM8K with prompt formatting modified to conform to the evaluation settings described by Meta here: https://huggingface.co/datasets/meta-llama/Meta-Llama-3.1-8B-Instruct-evals/viewer/Meta-Llama-3.1-8B-Instruct-evals__gsm8k__details?row=0 - Use this task with --fewshot_as_multiturn and --apply_chat_template to replicate Meta's reported performance. ### Checklist - [x] Is in Eval-harness v1.0 ? - [ ] Has been checked for regression from v1.0? - [ ] Has been checked for equivalence with original paper methodology? - [ ] "Main" checked variant clearly denoted? ### Variant Wishlist - [ ] Variant with Calculator (see https://github.com/openai/grade-school-math/blob/master/grade_school_math/calculator.py for example implementation) - [ ] Using Verifiers - [ ] Majority voting "without CoT"
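To follow the Llama-style setup referenced above, the two flags named in the task description are combined with an instruction-tuned checkpoint; a sketch (the model id is an example, not a requirement):

```bash
# gsm8k_cot_llama expects chat formatting and multi-turn few-shot exemplars.
lm_eval --model hf \
    --model_args pretrained=meta-llama/Meta-Llama-3.1-8B-Instruct \
    --tasks gsm8k_cot_llama \
    --apply_chat_template \
    --fewshot_as_multiturn \
    --batch_size auto
```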
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/gsm8k/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/gsm8k/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2325 }
# gsm_plus ### Paper Title: `GSM-PLUS: A Comprehensive Benchmark for Evaluating the Robustness of LLMs as Mathematical Problem Solvers` Abstract: `Large language models (LLMs) have achieved impressive performance across various mathematical reasoning benchmarks. However, there are increasing debates regarding whether these models truly understand and apply mathematical knowledge or merely rely on shortcuts for mathematical reasoning. One essential and frequently occurring evidence is that when the math questions are slightly changed, LLMs can behave incorrectly. This motivates us to evaluate the robustness of LLMs’ math reasoning capability by testing a wide range of question variations. We introduce the adversarial grade school math (GSM-PLUS) dataset, an extension of GSM8K augmented with various mathematical perturbations. Our experiments on 25 LLMs and 4 prompting techniques show that while LLMs exhibit different levels of math reasoning abilities, their performances are far from robust. In particular, even for problems that have been solved in GSM8K, LLMs can make mistakes when new statements are added or the question targets are altered. We also explore whether more robust performance can be achieved by composing existing prompting methods, in which we try an iterative method that generates and verifies each intermediate thought based on its reasoning goal and calculation result.` Homepage: https://huggingface.co/datasets/qintongli/GSM-Plus ### Citation ```bibtex @misc{li2024gsmpluscomprehensivebenchmarkevaluating, title={GSM-Plus: A Comprehensive Benchmark for Evaluating the Robustness of LLMs as Mathematical Problem Solvers}, author={Qintong Li and Leyang Cui and Xueliang Zhao and Lingpeng Kong and Wei Bi}, year={2024}, eprint={2402.19255}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2402.19255}, } ``` ### Groups and Tasks #### Groups * Not part of a group yet #### Tasks The following tasks evaluate subjects in the gsm_plus dataset - `gsm_plus` - `gsm_plus_mini` ### Checklist For adding novel benchmarks/datasets to the library: * [x] Is the task an existing benchmark in the literature? * [x] Have you referenced the original paper that introduced the task? * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
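A quick way to try the benchmark is the reduced `gsm_plus_mini` split; a sketch with a placeholder model and a capped example count for smoke testing:

```bash
# Drop --limit for a full evaluation of the mini split, or switch to the full gsm_plus task.
lm_eval --model hf --model_args pretrained=meta-llama/Llama-2-7b-hf --tasks gsm_plus_mini --device cuda:0 --limit 100 --batch_size auto
```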
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/gsm_plus/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/gsm_plus/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2764 }
# HAE-RAE BENCH ### Paper Title: `HAE-RAE Bench: Evaluation of Korean Knowledge in Language Models` Abstract: `Large Language Models (LLMs) trained on massive corpora demonstrate impressive capabilities in a wide range of tasks. While there are ongoing efforts to adapt these models to languages beyond English, the attention given to their evaluation methodologies remains limited. Current multilingual benchmarks often rely on back translations or re-implementations of English tests, limiting their capacity to capture unique cultural and linguistic nuances. To bridge this gap for the Korean language, we introduce HAE-RAE Bench, a dataset curated to challenge models lacking Korean cultural and contextual depth. The dataset encompasses six downstream tasks across four domains: vocabulary, history, general knowledge, and reading comprehension. Contrary to traditional evaluation suites focused on token or sequence classification and specific mathematical or logical reasoning, HAE-RAE Bench emphasizes a model's aptitude for recalling Korean-specific knowledge and cultural contexts. Comparative analysis with prior Korean benchmarks indicates that the HAE-RAE Bench presents a greater challenge to non-native models, by disturbing abilities and knowledge learned from English being transferred.` Homepage: https://huggingface.co/datasets/HAERAE-HUB/HAE_RAE_BENCH ### Citation @misc{son2023haerae, title={HAE-RAE Bench: Evaluation of Korean Knowledge in Language Models}, author={Guijin Son and Hanwool Lee and Suwan Kim and Huiseo Kim and Jaecheol Lee and Je Won Yeom and Jihyu Jung and Jung Woo Kim and Songseong Kim}, year={2023}, eprint={2309.02706}, archivePrefix={arXiv}, primaryClass={cs.CL} } ### Groups and Tasks #### Groups * `haerae`: 'It consists of five tasks provided in the HAERAE-BENCH paper. 'Reading Comprehension' was excluded from the implementation due to copyright issues. We will include it in the next haerae update. For other tasks, some part of data may be replaced or increased with the production of Haerae v1.1. Please note this when using it.' #### Tasks The following tasks evaluate subjects in the HaeRae dataset - `haerae_standard_nomenclature` - `haerae_loan_word` - `haerae_rare_word` - `haerae_general_knowledge` - `haerae_history` ### Checklist For adding novel benchmarks/datasets to the library: * [x] Is the task an existing benchmark in the literature? * [x] Have you referenced the original paper that introduced the task? * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
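A rough invocation sketch for the group (the Korean model id below is only an illustrative choice, echoing the Polyglot-Ko models discussed in the paper):

```bash
# Runs the implemented HAE-RAE Bench subtasks via the `haerae` group.
lm_eval --model hf --model_args pretrained=EleutherAI/polyglot-ko-1.3b --tasks haerae --device cuda:0 --batch_size auto
```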
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/haerae/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/haerae/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 3003 }
# HEAD-QA ### Paper HEAD-QA: A Healthcare Dataset for Complex Reasoning https://arxiv.org/pdf/1906.04701.pdf HEAD-QA is a multi-choice HEAlthcare Dataset. The questions come from exams to access a specialized position in the Spanish healthcare system, and are challenging even for highly specialized humans. They are designed by the Ministerio de Sanidad, Consumo y Bienestar Social. The dataset contains questions about the following topics: medicine, nursing, psychology, chemistry, pharmacology and biology. Homepage: https://aghie.github.io/head-qa/ ### Citation ``` @inproceedings{vilares-gomez-rodriguez-2019-head, title = "{HEAD}-{QA}: A Healthcare Dataset for Complex Reasoning", author = "Vilares, David and G{\'o}mez-Rodr{\'i}guez, Carlos", booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/P19-1092", doi = "10.18653/v1/P19-1092", pages = "960--966", abstract = "We present HEAD-QA, a multi-choice question answering testbed to encourage research on complex reasoning. The questions come from exams to access a specialized position in the Spanish healthcare system, and are challenging even for highly specialized humans. We then consider monolingual (Spanish) and cross-lingual (to English) experiments with information retrieval and neural techniques. We show that: (i) HEAD-QA challenges current methods, and (ii) the results lag well behind human performance, demonstrating its usefulness as a benchmark for future work.", } ``` ### Groups and Tasks #### Groups - `headqa`: Evaluates `headqa_en` and `headqa_es` #### Tasks * `headqa_en` - English variant of HEAD-QA * `headqa_es` - Spanish variant of HEAD-QA ### Checklist * [x] Is the task an existing benchmark in the literature? * [ ] Have you referenced the original paper that introduced the task? * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [x] Is the "Main" variant of this task clearly denoted? * [x] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?\ * [x] Same as LM Evaluation Harness v0.3.0 implementation
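Both language variants can be evaluated in one pass via the group; a sketch with placeholder model settings:

```bash
# `headqa` expands to headqa_en and headqa_es.
lm_eval --model hf --model_args pretrained=gpt2 --tasks headqa --device cuda:0 --batch_size auto
```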
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/headqa/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/headqa/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2581 }
# HellaSwag ### Paper Title: `HellaSwag: Can a Machine Really Finish Your Sentence?` Abstract: https://arxiv.org/abs/1905.07830 Recent work by Zellers et al. (2018) introduced a new task of commonsense natural language inference: given an event description such as "A woman sits at a piano," a machine must select the most likely followup: "She sets her fingers on the keys." With the introduction of BERT, near human-level performance was reached. Does this mean that machines can perform human level commonsense inference? In this paper, we show that commonsense inference still proves difficult for even state-of-the-art models, by presenting HellaSwag, a new challenge dataset. Though its questions are trivial for humans (>95% accuracy), state-of-the-art models struggle (<48%). We achieve this via Adversarial Filtering (AF), a data collection paradigm wherein a series of discriminators iteratively select an adversarial set of machine-generated wrong answers. AF proves to be surprisingly robust. The key insight is to scale up the length and complexity of the dataset examples towards a critical 'Goldilocks' zone wherein generated text is ridiculous to humans, yet often misclassified by state-of-the-art models. Our construction of HellaSwag, and its resulting difficulty, sheds light on the inner workings of deep pretrained models. More broadly, it suggests a new path forward for NLP research, in which benchmarks co-evolve with the evolving state-of-the-art in an adversarial way, so as to present ever-harder challenges. Homepage: `https://rowanzellers.com/hellaswag/` ### Citation ``` @inproceedings{zellers2019hellaswag, title={HellaSwag: Can a Machine Really Finish Your Sentence?}, author={Zellers, Rowan and Holtzman, Ari and Bisk, Yonatan and Farhadi, Ali and Choi, Yejin}, booktitle ={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics}, year={2019} } ``` ### Groups and Tasks #### Groups - Not part of a group yet #### Tasks - `hellaswag` ### Checklist For adding novel benchmarks/datasets to the library: * [x] Is the task an existing benchmark in the literature? * [x] Have you referenced the original paper that introduced the task? * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
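For larger models, the task can also be launched through `accelerate`, mirroring the pattern used elsewhere in this harness (the model id and dtype below are placeholders):

```bash
accelerate launch -m lm_eval --model hf --model_args pretrained=meta-llama/Llama-2-7b-hf,dtype="float16" --tasks hellaswag --batch_size auto
```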
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/hellaswag/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/hellaswag/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2709 }
# ETHICS Dataset

### Paper

Aligning AI With Shared Human Values
https://arxiv.org/abs/2008.02275

The ETHICS dataset is a benchmark that spans concepts in justice, well-being, duties, virtues, and commonsense morality. Models predict widespread moral judgments about diverse text scenarios. This requires connecting physical and social world knowledge to value judgements, a capability that may enable us to steer chatbot outputs or eventually regularize open-ended reinforcement learning agents.

Homepage: https://github.com/hendrycks/ethics

### Citation

```
@article{hendrycks2021ethics,
    title={Aligning AI With Shared Human Values},
    author={Dan Hendrycks and Collin Burns and Steven Basart and Andrew Critch and Jerry Li and Dawn Song and Jacob Steinhardt},
    journal={Proceedings of the International Conference on Learning Representations (ICLR)},
    year={2021}
}
```

### Groups and Tasks

#### Groups

- `hendrycks_ethics`

#### Tasks

* `ethics_cm`
* `ethics_deontology`
* `ethics_justice`
* `ethics_utilitarianism`
* (MISSING) `ethics_utilitarianism_original`
* `ethics_virtue`

### Checklist

* [x] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?

If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
* [ ] Matches v0.3.0 of Eval Harness
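A minimal sketch for running every implemented subtask at once (placeholder model and runtime flags):

```bash
# The `hendrycks_ethics` group covers ethics_cm, ethics_deontology, ethics_justice, ethics_utilitarianism, and ethics_virtue.
lm_eval --model hf --model_args pretrained=gpt2 --tasks hendrycks_ethics --device cuda:0 --batch_size 8
```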
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/hendrycks_ethics/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/hendrycks_ethics/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1767 }
# MATH ## Paper Measuring Mathematical Problem Solving With the MATH Dataset https://arxiv.org/abs/2103.03874 Many intellectual endeavors require mathematical problem solving, but this skill remains beyond the capabilities of computers. To measure this ability in machine learning models, we introduce MATH, a new dataset of 12,500 challenging competition mathematics problems. Each problem in MATH has a full step-by-step solution which can be used to teach models to generate answer derivations and explanations. NOTE: This task corresponds to the MATH (`hendrycks_math`) implementation at https://github.com/EleutherAI/lm-evaluation-harness/tree/master . For the variant which uses the custom 4-shot prompt in the Minerva paper (https://arxiv.org/abs/2206.14858), and SymPy answer checking as done by Minerva, see `lm_eval/tasks/minerva_math`. Homepage: https://github.com/hendrycks/math ## Citation ``` @article{hendrycksmath2021, title={Measuring Mathematical Problem Solving With the MATH Dataset}, author={Dan Hendrycks and Collin Burns and Saurav Kadavath and Akul Arora and Steven Basart and Eric Tang and Dawn Song and Jacob Steinhardt}, journal={NeurIPS}, year={2021} } ``` ### Groups and Tasks #### Groups - `hendrycks_math`: the MATH benchmark from Hendrycks et al. 0- or few-shot. #### Tasks - `hendrycks_math_algebra` - `hendrycks_math_counting_and_prob` - `hendrycks_math_geometry` - `hendrycks_math_intermediate_algebra` - `hendrycks_math_num_theory` - `hendrycks_math_prealgebra` - `hendrycks_math_precalc` ### Checklist The checklist is the following: For adding novel benchmarks/datasets to the library: * [x] Is the task an existing benchmark in the literature? * [x] Have you referenced the original paper that introduced the task? * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? * Answer extraction code is taken from the original MATH benchmark paper's repository. If other tasks on this dataset are already supported: * [x] Is the "Main" variant of this task clearly denoted? * [x] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [x] Have you noted which, if any, published evaluation setups are matched by this variant?
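A hedged sketch for the full group (the model id and output path are placeholders; the shot count can be set with `--num_fewshot`, since the group is defined for 0- or few-shot use):

```bash
# Runs all seven MATH subject splits; see lm_eval/tasks/minerva_math for the Minerva-style variant.
lm_eval --model hf \
    --model_args pretrained=meta-llama/Llama-2-7b-hf \
    --tasks hendrycks_math \
    --device cuda:0 \
    --batch_size 4 \
    --output_path data/hendrycks_math/results.json
```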
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/hendrycks_math/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/hendrycks_math/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2347 }
# IFEval ### Paper Title: Instruction-Following Evaluation for Large Language Models Abstract: https://arxiv.org/abs/2311.07911 One core capability of Large Language Models (LLMs) is to follow natural language instructions. However, the evaluation of such abilities is not standardized: Human evaluations are expensive, slow, and not objectively reproducible, while LLM-based auto-evaluation is potentially biased or limited by the ability of the evaluator LLM. To overcome these issues, we introduce Instruction-Following Eval (IFEval) for large language models. IFEval is a straightforward and easy-to-reproduce evaluation benchmark. It focuses on a set of "verifiable instructions" such as "write in more than 400 words" and "mention the keyword of AI at least 3 times". We identified 25 types of those verifiable instructions and constructed around 500 prompts, with each prompt containing one or more verifiable instructions. We show evaluation results of two widely available LLMs on the market. Our code and data can be found at https://github.com/google-research/google-research/tree/master/instruction_following_eval Homepage: https://github.com/google-research/google-research/tree/master/instruction_following_eval ### Citation ``` @article{zhou2023instructionfollowing, title={Instruction-Following Evaluation for Large Language Models}, author={Jeffrey Zhou and Tianjian Lu and Swaroop Mishra and Siddhartha Brahma and Sujoy Basu and Yi Luan and Denny Zhou and Le Hou}, journal={arXiv preprint arXiv:2311.07911}, year={2023}, } ``` ### Groups and Tasks #### Groups * Not part of a group yet #### Tasks * `ifeval` ### Checklist For adding novel benchmarks/datasets to the library: * [x] Is the task an existing benchmark in the literature? * [x] Have you referenced the original paper that introduced the task? * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
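IFEval relies on optional dependencies in the harness (an `ifeval` extra appears in this repository's install targets); a sketch with a placeholder model:

```bash
# Install the optional dependencies, then run the generative, zero-shot task.
pip install -e ".[ifeval]"
lm_eval --model hf --model_args pretrained=meta-llama/Llama-2-7b-hf --tasks ifeval --device cuda:0 --batch_size auto
```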
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/ifeval/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/ifeval/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2325 }
# inverse_scaling ### Paper Title: `Inverse Scaling: When Bigger Isn't Better` Abstract: `Work on scaling laws has found that large language models (LMs) show predictable improvements to overall loss with increased scale (model size, training data, and compute). Here, we present evidence for the claim that LMs may show inverse scaling, or worse task performance with increased scale, e.g., due to flaws in the training objective and data. We present empirical evidence of inverse scaling on 11 datasets collected by running a public contest, the Inverse Scaling Prize, with a substantial prize pool. Through analysis of the datasets, along with other examples found in the literature, we identify four potential causes of inverse scaling: (i) preference to repeat memorized sequences over following in-context instructions, (ii) imitation of undesirable patterns in the training data, (iii) tasks containing an easy distractor task which LMs could focus on, rather than the harder real task, and (iv) correct but misleading few-shot demonstrations of the task. We release the winning datasets at this https URL to allow for further investigation of inverse scaling. Our tasks have helped drive the discovery of U-shaped and inverted-U scaling trends, where an initial trend reverses, suggesting that scaling trends are less reliable at predicting the behavior of larger-scale models than previously understood. Overall, our results suggest that there are tasks for which increased model scale alone may not lead to progress, and that more careful thought needs to go into the data and objectives for training language models.` Note: This is not official implementation of inverse scaling prize. Implemented by h-albert-lee with permission from the authors of the paper. Homepage: https://github.com/inverse-scaling/prize ### Citation @article{mckenzie2023inverse, title={Inverse Scaling: When Bigger Isn't Better}, author={Ian R. McKenzie and Alexander Lyzhov and Michael Pieler and Alicia Parrish and Aaron Mueller and Ameya Prabhu and Euan McLean and Aaron Kirtland and Alexis Ross and Alisa Liu and Andrew Gritsevskiy and Daniel Wurgaft and Derik Kauffman and Gabriel Recchia and Jiacheng Liu and Joe Cavanagh and Max Weiss and Sicong Huang and The Floating Droid and Tom Tseng and Tomasz Korbak and Xudong Shen and Yuhui Zhang and Zhengping Zhou and Najoung Kim and Samuel R. Bowman and Ethan Perez}, journal={arXiv preprint arXiv:2306.09479}, year={2023} } ### Groups and Tasks #### Groups * `inverse_scaling_mc`: all tasks of Inverse Scaling Prize (currently aside from Prompt Injection), matching their implementations on OPT for multiple-choice type classification tasks. **These match the published dataset versions from the prize, which may slightly differ from numbers in the paper (but have been tested for equivalence to the OPT numbers reported at https://huggingface.co/inverse-scaling/opt-1.3b_eval for multiple sizes.** #### Tasks - `inverse_scaling_hindsight_neglect_10shot` - `inverse_scaling_redefine_math` - `inverse_scaling_quote_repetition` - `inverse_scaling_neqa` - `inverse_scaling_winobias_antistereotype`: not an official Inverse Scaling prize winner, but eval results reported on it at https://huggingface.co/inverse-scaling/opt-1.3b_eval . 
- `inverse_scaling_into_the_unknown` - `inverse_scaling_memo_trap` - `inverse_scaling_modus_tollens` - `inverse_scaling_pattern_matching_suppression` - `inverse_scaling_repetitive_algebra` - `inverse_scaling_sig_figs` ### Checklist For adding novel benchmarks/datasets to the library: * [x] Is the task an existing benchmark in the literature? * [x] Have you referenced the original paper that introduced the task? * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
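A sketch of running the whole multiple-choice suite (the OPT checkpoint below is only an illustrative choice, matching the reference numbers linked above):

```bash
# `inverse_scaling_mc` expands to all multiple-choice Inverse Scaling Prize tasks.
lm_eval --model hf --model_args pretrained=facebook/opt-1.3b --tasks inverse_scaling_mc --device cuda:0 --batch_size auto
```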
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/inverse_scaling/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/inverse_scaling/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 4207 }
# k_mmlu ### Paper Title: `KMMLU : Measuring Massive Multitask Language Understanding in Korean` Abstract: `We propose KMMLU, a new Korean benchmark with 35,030 expert-level multiple-choice questions across 45 subjects ranging from humanities to STEM. Unlike previous Korean benchmarks that are translated from existing English benchmarks, KMMLU is collected from original Korean exams, capturing linguistic and cultural aspects of the Korean language. We test 26 publicly available and proprietary LLMs, identifying significant room for improvement. The best publicly available model achieves 50.54% on KMMLU, far below the average human performance of 62.6%. This model was primarily trained for English and Chinese, not Korean. Current LLMs tailored to Korean, such as Polyglot-Ko, perform far worse. Surprisingly, even the most capable proprietary LLMs, e.g., GPT-4 and HyperCLOVA X, achieve 59.95% and 53.40%, respectively. This suggests that further work is needed to improve Korean LLMs, and KMMLU offers the right tool to track this progress. We make our dataset publicly available on the Hugging Face Hub and integrate the benchmark into EleutherAI's Language Model Evaluation Harness.` Note: lm-eval-harness is using the micro average as the default. To replicate the test results in the paper, take the macro average for the scores evaluated with lm-eval-harness Homepage: https://huggingface.co/datasets/HAERAE-HUB/KMMLU ### Citation @article{son2024kmmlu, title={KMMLU: Measuring Massive Multitask Language Understanding in Korean}, author={Guijin Son and Hanwool Lee and Sungdong Kim and Seungone Kim and Niklas Muennighoff and Taekyoon Choi and Cheonbok Park and Kang Min Yoo and Stella Biderman}, journal={arXiv preprint arXiv:2402.11548}, year={2024} } ### Groups and Tasks #### Groups * `kmmlu`: 'All 45 subjects of the KMMLU dataset, evaluated following the methodology in MMLU's original implementation' * `kmmlu_direct`: 'kmmlu_direct solves questions using a straightforward *generative* multiple-choice question-answering approach' * `kmmlu_hard`: 'kmmlu_hard comprises difficult questions that at least one proprietary model failed to answer correctly using log-likelihood approach' * `kmmlu_hard_direct`: 'kmmlu_hard_direct solves questions of kmmlu_hard using direct(generative) approach' * `kmmlu_hard_cot`: 'kmmlu_hard_cot includes 5-shot of exemplars for chain-of-thought approach' #### Tasks The following tasks evaluate subjects in the KMMLU dataset - `kmmlu_direct_{subject_english}` The following tasks evaluate subjects in the KMMLU-Hard dataset - `kmmlu_hard_{subject_english}` - `kmmlu_hard_cot_{subject_english}` - `kmmlu_hard_direct_{subject_english}` ### Checklist For adding novel benchmarks/datasets to the library: * [x] Is the task an existing benchmark in the literature? * [x] Have you referenced the original paper that introduced the task? * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
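A sketch for the default log-likelihood evaluation over all 45 subjects (placeholder model and output path); per the note above, the per-subject scores in the output can be macro-averaged by hand to match the paper's reporting:

```bash
lm_eval --model hf --model_args pretrained=gpt2 --tasks kmmlu --device cuda:0 --batch_size auto --output_path data/kmmlu/results.json
```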
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/kmmlu/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/kmmlu/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 3408 }
# KoBEST

### Paper

Title: `KOBEST: Korean Balanced Evaluation of Significant Tasks`

Abstract: https://arxiv.org/abs/2204.04541

A well-formulated benchmark plays a critical role in spurring advancements in the natural language processing (NLP) field, as it allows objective and precise evaluation of diverse models. As modern language models (LMs) have become more elaborate and sophisticated, more difficult benchmarks that require linguistic knowledge and reasoning have been proposed. However, most of these benchmarks only support English, and great effort is necessary to construct benchmarks for other low resource languages. To this end, we propose a new benchmark named Korean balanced evaluation of significant tasks (KoBEST), which consists of five Korean-language downstream tasks. Professional Korean linguists designed the tasks that require advanced Korean linguistic knowledge. Moreover, our data is purely annotated by humans and thoroughly reviewed to guarantee high data quality. We also provide baseline models and human performance results. Our dataset is available on the Huggingface.

Homepage: https://huggingface.co/datasets/skt/kobest_v1

### Groups and Tasks

#### Groups

- `kobest`

#### Tasks

- `kobest_boolq`
- `kobest_copa`
- `kobest_hellaswag`
- `kobest_sentineg`
- `kobest_wic`

### Citation

@misc{
    author={Dohyeong Kim, Myeongjun Jang, Deuk Sin Kwon, Eric Davis},
    title={KOBEST: Korean Balanced Evaluation of Significant Tasks},
    DOI={https://doi.org/10.48550/arXiv.2204.04541},
    publisher={arXiv},
    year={2022},
    month={Apr}
}
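A minimal sketch for the group (the model id is a placeholder; any Hugging Face causal LM can be substituted):

```bash
# Runs all five KoBEST subtasks.
lm_eval --model hf --model_args pretrained=EleutherAI/polyglot-ko-1.3b --tasks kobest --device cuda:0 --batch_size 8
```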
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/kobest/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/kobest/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1587 }
# KorMedMCQA ### Paper Title: `KorMedMCQA: Multi-Choice Question Answering Benchmark for Korean Healthcare Professional Licensing Examinations` Abstract: `We introduce KorMedMCQA, the first Korean multiple-choice question answering (MCQA) benchmark derived from Korean healthcare professional licensing examinations, covering from the year 2012 to year 2023. This dataset consists of a selection of questions from the license examinations for doctors, nurses, and pharmacists, featuring a diverse array of subjects. We conduct baseline experiments on various large language models, including proprietary/open-source, multilingual/Korean-additional pretrained, and clinical context pretrained models, highlighting the potential for further enhancements. We make our data publicly available on HuggingFace and provide a evaluation script via LM-Harness, inviting further exploration and advancement in Korean healthcare environments.` Paper : https://arxiv.org/abs/2403.01469 Homepage: https://huggingface.co/datasets/sean0042/KorMedMCQA ### Citation ``` @article{kweon2024kormedmcqa, title={KorMedMCQA: Multi-Choice Question Answering Benchmark for Korean Healthcare Professional Licensing Examinations}, author={Sunjun Kweon and Byungjin Choi and Minkyu Kim and Rae Woong Park and Edward Choi}, journal={arXiv preprint arXiv:2403.01469}, year={2024} } ``` ### Groups and Tasks * `kormedmcqa`: Runs `kormedmcqa_doctor`, `kormedmcqa_nurse`, and `kormedmcqa_pharm`. #### Tasks * `kormedmcqa_doctor`: `Official Korean Doctor Examination` * `kormedmcqa_nurse`: `Official Korean Nurse Examination` * `kormedmcqa_pharm`: `Official Korean Pharmacist Examination` ### Checklist For adding novel benchmarks/datasets to the library: * [x] Is the task an existing benchmark in the literature? * [x] Have you referenced the original paper that introduced the task? * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
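The three licensing exams can be run together through the group or individually; a sketch with placeholder settings:

```bash
# Evaluate only the doctor examination subset; use --tasks kormedmcqa for all three exams.
lm_eval --model hf --model_args pretrained=EleutherAI/polyglot-ko-1.3b --tasks kormedmcqa_doctor --device cuda:0 --batch_size auto
```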
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/kormedmcqa/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/kormedmcqa/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2371 }
# LAMBADA ### Paper Title: `The LAMBADA dataset: Word prediction requiring a broad discourse context` Abstract: https://arxiv.org/pdf/1606.06031.pdf LAMBADA is a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. Homepage: https://zenodo.org/record/2630551#.X4Xzn5NKjUI ### Groups and Tasks #### Groups - `lambada` #### Tasks - `lambada_openai` - `lambada_standard` ### Citation @misc{ author={Paperno, Denis and Kruszewski, Germán and Lazaridou, Angeliki and Pham, Quan Ngoc and Bernardi, Raffaella and Pezzelle, Sandro and Baroni, Marco and Boleda, Gemma and Fernández, Raquel}, title={The LAMBADA dataset}, DOI={10.5281/zenodo.2630551}, publisher={Zenodo}, year={2016}, month={Aug} }
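A minimal sketch for the OpenAI variant (placeholder model; the standard variant is requested the same way):

```bash
# Reports word-prediction accuracy and perplexity on lambada_openai.
lm_eval --model hf --model_args pretrained=gpt2 --tasks lambada_openai --device cuda:0 --batch_size auto
```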
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/lambada/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/lambada/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1183 }
# LAMBADA Cloze ### Paper Title: `The LAMBADA dataset: Word prediction requiring a broad discourse context` Abstract: https://arxiv.org/abs/1606.06031 Cloze-style LAMBADA dataset. LAMBADA is a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. Homepage: https://zenodo.org/record/2630551#.X4Xzn5NKjUI ### Citation ``` @misc{ author={Paperno, Denis and Kruszewski, Germán and Lazaridou, Angeliki and Pham, Quan Ngoc and Bernardi, Raffaella and Pezzelle, Sandro and Baroni, Marco and Boleda, Gemma and Fernández, Raquel}, title={The LAMBADA dataset}, DOI={10.5281/zenodo.2630551}, publisher={Zenodo}, year={2016}, month={Aug} } ``` ### Groups and Tasks #### Groups * `lambada_cloze` #### Tasks * `lambada_openai_cloze_yaml` * `lambada_standard_cloze_yaml` ### Checklist For adding novel benchmarks/datasets to the library: * [ ] Is the task an existing benchmark in the literature? * [ ] Have you referenced the original paper that introduced the task? * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/lambada_cloze/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/lambada_cloze/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1931 }
# LAMBADA ### Paper The LAMBADA dataset: Word prediction requiring a broad discourse context https://arxiv.org/pdf/1606.06031.pdf LAMBADA is a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. Homepage: https://zenodo.org/record/2630551#.X4Xzn5NKjUI ### Citation @misc{ author={Paperno, Denis and Kruszewski, Germán and Lazaridou, Angeliki and Pham, Quan Ngoc and Bernardi, Raffaella and Pezzelle, Sandro and Baroni, Marco and Boleda, Gemma and Fernández, Raquel}, title={The LAMBADA dataset}, DOI={10.5281/zenodo.2630551}, publisher={Zenodo}, year={2016}, month={Aug} } ### Groups and Tasks #### Groups * `lambada_multilingual`: Evaluates all `lambada_mt_X` tasks #### Tasks * `lambada_mt_{en, fr, de, it, es}`: Machine-translated versions of OpenAI's Lambada variant. ### Checklist * [x] Is the task an existing benchmark in the literature? * [x] Have you referenced the original paper that introduced the task? * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? (This task is novel to the Evaluation Harness, and has been checked against v0.3.0 of the harness.) If other tasks on this dataset are already supported: * [x] Is the "Main" variant of this task clearly denoted? * [x] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [x] Have you noted which, if any, published evaluation setups are matched by this variant?
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/lambada_multilingual/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/lambada_multilingual/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1992 }
# LAMBADA ### Paper The LAMBADA dataset: Word prediction requiring a broad discourse context https://arxiv.org/pdf/1606.06031.pdf LAMBADA is a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. Homepage: https://zenodo.org/record/2630551#.X4Xzn5NKjUI ### Citation @misc{ author={Paperno, Denis and Kruszewski, Germán and Lazaridou, Angeliki and Pham, Quan Ngoc and Bernardi, Raffaella and Pezzelle, Sandro and Baroni, Marco and Boleda, Gemma and Fernández, Raquel}, title={The LAMBADA dataset}, DOI={10.5281/zenodo.2630551}, publisher={Zenodo}, year={2016}, month={Aug} } @article{bellagente2024stable, title={Stable LM 2 1.6 B Technical Report}, author={Bellagente, Marco and Tow, Jonathan and Mahan, Dakota and Phung, Duy and Zhuravinskyi, Maksym and Adithyan, Reshinth and Baicoianu, James and Brooks, Ben and Cooper, Nathan and Datta, Ashish and others}, journal={arXiv preprint arXiv:2402.17834}, year={2024} } ### Groups and Tasks #### Groups * `lambada_multilingual_stablelm`: Evaluates all `lambada_mt_stablelm_X` tasks #### Tasks * `lambada_mt_stablelm_{en, fr, de, it, es}`: Machine-translated versions of OpenAI's Lambada variant as reported in "Stable LM 2 1.6 B Technical Report" (Bellagente et. al.). ### Checklist * [x] Is the task an existing benchmark in the literature? * [x] Have you referenced the original paper that introduced the task? * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? (This task is novel to the Evaluation Harness, and has been checked against v0.3.0 of the harness.) If other tasks on this dataset are already supported: * [x] Is the "Main" variant of this task clearly denoted? * [x] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [x] Have you noted which, if any, published evaluation setups are matched by this variant?
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/lambada_multilingual_stablelm/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/lambada_multilingual_stablelm/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2445 }
# Leaderboard evaluations

Our goal with this group is to create a fixed, unchanging-over-time version of these evaluations to power the Open LLM Leaderboard on HuggingFace. As we want to evaluate models across capabilities, the list currently contains:
- BBH (3-shot, multichoice)
- GPQA (0-shot, multichoice)
- mmlu-pro (5-shot, multichoice)
- Musr (0-shot, multichoice)
- ifeval (0-shot, generative)
- Math-lvl-5 (4-shot, generative, minerva version)

Details on the choice of those evals can be found [here](https://huggingface.co/spaces/open-llm-leaderboard/blog)!

## Install

To install the `lm-eval` package with support for leaderboard evaluations, run the following (an illustrative evaluation command is shown at the end of this README):

```bash
git clone --depth 1 https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e ".[math,ifeval,sentencepiece]"
```

## BigBenchHard (BBH)

A suite of 23 challenging BIG-Bench tasks which we call BIG-Bench Hard (BBH). These are the tasks for which prior language model evaluations did not outperform the average human-rater.

### Paper

Title: Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them

BIG-Bench (Srivastava et al., 2022) is a diverse evaluation suite that focuses on tasks believed to be beyond the capabilities of current language models. Language models have already made good progress on this benchmark, with the best model in the BIG-Bench paper outperforming average reported human-rater results on 65% of the BIG-Bench tasks via few-shot prompting. But on what tasks do language models fall short of average human-rater performance, and are those tasks actually unsolvable by current language models?

In this work, we focus on a suite of 23 challenging BIG-Bench tasks which we call BIG-Bench Hard (BBH). These are the tasks for which prior language model evaluations did not outperform the average human-rater. We find that applying chain-of-thought (CoT) prompting to BBH tasks enables PaLM to surpass the average human-rater performance on 10 of the 23 tasks, and Codex (code-davinci-002) to surpass the average human-rater performance on 17 of the 23 tasks. Since many tasks in BBH require multi-step reasoning, few-shot prompting without CoT, as done in the BIG-Bench evaluations (Srivastava et al., 2022), substantially underestimates the best performance and capabilities of language models, which is better captured via CoT prompting. As further analysis, we explore the interaction between CoT and model scale on BBH, finding that CoT enables emergent task performance on several BBH tasks with otherwise flat scaling curves.
- Paper: https://huggingface.co/papers/2210.09261
- Homepage: https://github.com/suzgunmirac/BIG-Bench-Hard

### Citation

```
@article{suzgun2022challenging,
  title={Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them},
  author={Suzgun, Mirac and Scales, Nathan and Sch{\"a}rli, Nathanael and Gehrmann, Sebastian and Tay, Yi and Chung, Hyung Won and Chowdhery, Aakanksha and Le, Quoc V and Chi, Ed H and Zhou, Denny and Wei, Jason},
  journal={arXiv preprint arXiv:2210.09261},
  year={2022}
}
```

### Groups

- `leaderboard_bbh`

### Tasks

- `leaderboard_bbh_boolean_expressions`
- `leaderboard_bbh_causal_judgement`
- `leaderboard_bbh_date_understanding`
- `leaderboard_bbh_disambiguation_qa`
- `leaderboard_bbh_formal_fallacies`
- `leaderboard_bbh_geometric_shapes`
- `leaderboard_bbh_hyperbaton`
- `leaderboard_bbh_logical_deduction_five_objects`
- `leaderboard_bbh_logical_deduction_seven_objects`
- `leaderboard_bbh_logical_deduction_three_objects`
- `leaderboard_bbh_movie_recommendation`
- `leaderboard_bbh_navigate`
- `leaderboard_bbh_object_counting`
- `leaderboard_bbh_penguins_in_a_table`
- `leaderboard_bbh_reasoning_about_colored_objects`
- `leaderboard_bbh_ruin_names`
- `leaderboard_bbh_salient_translation_error_detection`
- `leaderboard_bbh_snarks`
- `leaderboard_bbh_sports_understanding`
- `leaderboard_bbh_temporal_sequences`
- `leaderboard_bbh_tracking_shuffled_objects_five_objects`
- `leaderboard_bbh_tracking_shuffled_objects_seven_objects`
- `leaderboard_bbh_tracking_shuffled_objects_three_objects`
- `leaderboard_bbh_web_of_lies`

## GPQA

### Paper

Title: GPQA: A Graduate-Level Google-Proof Q&A Benchmark

We present GPQA, a challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. We ensure that the questions are high-quality and extremely difficult: experts who have or are pursuing PhDs in the corresponding domains reach 65% accuracy (74% when discounting clear mistakes the experts identified in retrospect), while highly skilled non-expert validators only reach 34% accuracy, despite spending on average over 30 minutes with unrestricted access to the web (i.e., the questions are “Google-proof”). The questions are also difficult for state-of-the-art AI systems, with our strongest GPT-4–based baseline achieving 39% accuracy. If we are to use future AI systems to help us answer very hard questions—for example, when developing new scientific knowledge—we need to develop scalable oversight methods that enable humans to supervise their outputs, which may be difficult even if the supervisors are themselves skilled and knowledgeable. The difficulty of GPQA both for skilled non-experts and frontier AI systems should enable realistic scalable oversight experiments, which we hope can help devise ways for human experts to reliably get truthful information from AI systems that surpass human capabilities.

- Paper: https://huggingface.co/papers/2311.12022
- Homepage: https://github.com/idavidrein/gpqa/tree/main

### Citation

```
@misc{rein2023gpqa,
  title={GPQA: A Graduate-Level Google-Proof Q&A Benchmark},
  author={David Rein and Betty Li Hou and Asa Cooper Stickland and Jackson Petty and Richard Yuanzhe Pang and Julien Dirani and Julian Michael and Samuel R. Bowman},
  year={2023},
  eprint={2311.12022},
  archivePrefix={arXiv},
  primaryClass={cs.AI}
}
```

### Groups

- `leaderboard_gpqa`

### Tasks

- `leaderboard_gpqa_extended`
- `leaderboard_gpqa_diamond`
- `leaderboard_gpqa_main`

## IFEval

### Paper

Title: Instruction-Following Evaluation for Large Language Models

One core capability of Large Language Models (LLMs) is to follow natural language instructions. However, the evaluation of such abilities is not standardized: Human evaluations are expensive, slow, and not objectively reproducible, while LLM-based auto-evaluation is potentially biased or limited by the ability of the evaluator LLM. To overcome these issues, we introduce Instruction-Following Eval (IFEval) for large language models. IFEval is a straightforward and easy-to-reproduce evaluation benchmark. It focuses on a set of "verifiable instructions" such as "write in more than 400 words" and "mention the keyword of AI at least 3 times". We identified 25 types of those verifiable instructions and constructed around 500 prompts, with each prompt containing one or more verifiable instructions. We show evaluation results of two widely available LLMs on the market.

- Paper: https://huggingface.co/papers/2311.07911
- Homepage: https://github.com/google-research/google-research/tree/master/instruction_following_eval

### Citation

```
@article{zhou2023instructionfollowing,
  title={Instruction-Following Evaluation for Large Language Models},
  author={Jeffrey Zhou and Tianjian Lu and Swaroop Mishra and Siddhartha Brahma and Sujoy Basu and Yi Luan and Denny Zhou and Le Hou},
  journal={arXiv preprint arXiv:2311.07911},
  year={2023},
}
```

### Tasks

- `leaderboard_ifeval`

## MATH-hard

This is the 4-shot variant of Minerva MATH, keeping only the level 5 questions.

### Paper

Title: Measuring Mathematical Problem Solving With the MATH Dataset

Many intellectual endeavors require mathematical problem solving, but this skill remains beyond the capabilities of computers. To measure this ability in machine learning models, we introduce MATH, a new dataset of 12,500 challenging competition mathematics problems. Each problem in MATH has a full step-by-step solution which can be used to teach models to generate answer derivations and explanations.

NOTE: The few-shot prompts and the generated-answer extraction are based on [Minerva](https://arxiv.org/abs/2206.14858), and exact-match equivalence is calculated using the `sympy` library. This requires additional dependencies, which can be installed via the `lm-eval[math]` extra.
- Paper: https://huggingface.co/papers/2103.03874 - Homepage: https://github.com/hendrycks/math ### Citation ``` @article{hendrycksmath2021, title={Measuring Mathematical Problem Solving With the MATH Dataset}, author={Dan Hendrycks and Collin Burns and Saurav Kadavath and Akul Arora and Steven Basart and Eric Tang and Dawn Song and Jacob Steinhardt}, journal={NeurIPS}, year={2021} } @misc{2206.14858, Author = {Aitor Lewkowycz and Anders Andreassen and David Dohan and Ethan Dye and Henryk Michalewski and Vinay Ramasesh and Ambrose Slone and Cem Anil and Imanol Schlag and Theo Gutman-Solo and Yuhuai Wu and Behnam Neyshabur and Guy Gur-Ari and Vedant Misra}, Title = {Solving Quantitative Reasoning Problems with Language Models}, Year = {2022}, Eprint = {arXiv:2206.14858}, } ``` ### Groups - `leaderboard_math_hard` ### Tasks - `leaderboard_math_algebra_hard` - `leaderboard_math_counting_and_prob_hard` - `leaderboard_math_geometry_hard` - `leaderboard_math_intermediate_algebra_hard` - `leaderboard_math_num_theory_hard` - `leaderboard_math_prealgebra_hard` - `leaderboard_math_precalculus_hard` ## MMLU-Pro ### Paper Title: MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark In the age of large-scale language models, benchmarks like the Massive Multitask Language Understanding (MMLU) have been pivotal in pushing the boundaries of what AI can achieve in language comprehension and reasoning across diverse domains. However, as models continue to improve, their performance on these benchmarks has begun to plateau, making it increasingly difficult to discern differences in model capabilities. This paper introduces MMLU-Pro, an enhanced dataset designed to extend the mostly knowledge-driven MMLU benchmark by integrating more challenging, reasoning-focused questions and expanding the choice set from four to ten options. Additionally, MMLU-Pro eliminates the trivial and noisy questions in MMLU. Our experimental results show that MMLU-Pro not only raises the challenge, causing a significant drop in accuracy by 16% to 33% compared to MMLU but also demonstrates greater stability under varying prompts. With 24 different prompt styles tested, the sensitivity of model scores to prompt variations decreased from 4-5% in MMLU to just 2% in MMLU-Pro. Additionally, we found that models utilizing Chain of Thought (CoT) reasoning achieved better performance on MMLU-Pro compared to direct answering, which is in stark contrast to the findings on the original MMLU, indicating that MMLU-Pro includes more complex reasoning questions. Our assessments confirm that MMLU-Pro is a more discriminative benchmark to better track progress in the field. 
- Paper: https://huggingface.co/papers/2406.01574 - Homepage: https://huggingface.co/datasets/TIGER-Lab/MMLU-Pro ### Citation ``` @misc{wang2024mmluprorobustchallengingmultitask, title={MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark}, author={Yubo Wang and Xueguang Ma and Ge Zhang and Yuansheng Ni and Abhranil Chandra and Shiguang Guo and Weiming Ren and Aaran Arulraj and Xuan He and Ziyan Jiang and Tianle Li and Max Ku and Kai Wang and Alex Zhuang and Rongqi Fan and Xiang Yue and Wenhu Chen}, year={2024}, eprint={2406.01574}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2406.01574}, } ``` ### Groups - `leaderboard_mmlu_pro` ### Tasks - `leaderboard_mmlu_pro` ## Musr ### Paper Title: MuSR: Testing the Limits of Chain-of-thought with Multistep Soft Reasoning While large language models (LLMs) equipped with techniques like chain-of-thought prompting have demonstrated impressive capabilities, they still fall short in their ability to reason robustly in complex settings. However, evaluating LLM reasoning is challenging because system capabilities continue to grow while benchmark datasets for tasks like logical deduction have remained static. We introduce MuSR, a dataset for evaluating language models on multistep soft reasoning tasks specified in a natural language narrative. This dataset has two crucial features. First, it is created through a novel neurosymbolic synthetic-to-natural generation algorithm, enabling the construction of complex reasoning instances that challenge GPT-4 (e.g., murder mysteries roughly 1000 words in length) and which can be scaled further as more capable LLMs are released. Second, our dataset instances are free text narratives corresponding to real-world domains of reasoning; this makes it simultaneously much more challenging than other synthetically-crafted benchmarks while remaining realistic and tractable for human annotators to solve with high accuracy. We evaluate a range of LLMs and prompting techniques on this dataset and characterize the gaps that remain for techniques like chain-of-thought to perform robust reasoning. - Paper: https://huggingface.co/papers/2310.16049 - Homepage: https://zayne-sprague.github.io/MuSR/ ### Citation ``` @misc{sprague2024musrtestinglimitschainofthought, title={MuSR: Testing the Limits of Chain-of-thought with Multistep Soft Reasoning}, author={Zayne Sprague and Xi Ye and Kaj Bostrom and Swarat Chaudhuri and Greg Durrett}, year={2024}, eprint={2310.16049}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2310.16049}, } ``` ### Groups - `leaderboard_musr` ### Tasks - `leaderboard_musr_murder_mysteries` - `leaderboard_musr_object_placements` - `leaderboard_musr_team_allocation`
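As promised in the Install section, here is an illustrative evaluation command covering the groups listed in this README. Only the group and task names come from this README; the model identifier, dtype, and output path are placeholders, and the few-shot counts are fixed by the task configs, so they are not overridden on the command line.

```bash
# Illustrative sketch only: run all leaderboard groups documented above.
# Model name, dtype, and output path are placeholders.
lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-1.4b,dtype=bfloat16 \
    --tasks leaderboard_bbh,leaderboard_gpqa,leaderboard_ifeval,leaderboard_math_hard,leaderboard_mmlu_pro,leaderboard_musr \
    --batch_size auto \
    --output_path results/leaderboard
```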
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/leaderboard/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/leaderboard/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 14076 }
# LingOly ### Paper Title: `LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages` Abstract: `https://arxiv.org/abs/2406.06196` `In this paper, we present the LingOly benchmark, a novel benchmark for advanced reasoning abilities in large language models. Using challenging Linguistic Olympiad puzzles, we evaluate (i) capabilities for in-context identification and generalisation of linguistic patterns in very low-resource or extinct languages, and (ii) abilities to follow complex task instructions. The LingOly benchmark covers more than 90 mostly low-resource languages, minimising issues of data contamination, and contains 1,133 problems across 6 formats and 5 levels of human difficulty. We assess performance with both direct accuracy and comparison to a no-context baseline to penalise memorisation. Scores from 11 state-of-the-art LLMs demonstrate the benchmark to be challenging, and models perform poorly on the higher difficulty problems. On harder problems, even the top model only achieved 38.7% accuracy, 24.7% improvement over the no-context baseline. Large closed models typically outperform open models, and in general, the higher resource the language, the better the scores. These results indicate, in absence of memorisation, true multi-step out-of-domain reasoning remains a challenge for current language models.` Homepage: `https://github.com/am-bean/lingOly` ### Citation ``` @article{beanLINGOLYBenchmarkOlympiadLevel2024, title = {{LINGOLY}: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages}, shorttitle = {{LINGOLY}}, url = {http://arxiv.org/abs/2406.06196}, author = {Bean, Andrew M. and Hellsten, Simi and Mayne, Harry and Magomere, Jabez and Chi, Ethan A. and Chi, Ryan and Hale, Scott A. and Kirk, Hannah Rose}, month = jun, year = {2024}, keywords = {Computer Science - Computation and Language} } ``` ### Tasks * `lingoly`: `runs both _context and _nocontext and computes the difference` * `lingoly_context`: `exact match of generations to reference answers` * `lingoly_nocontext`: `exact match of generations to reference answers, but with context removed` ### Checklist For adding novel benchmarks/datasets to the library: * [x] Is the task an existing benchmark in the literature? * [x] Have you referenced the original paper that introduced the task? * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/lingoly/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/lingoly/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2897 }
# LogiQA

### Paper

Title: `LogiQA: A Challenge Dataset for Machine Reading Comprehension with Logical Reasoning`

Abstract: https://arxiv.org/abs/2007.08124

LogiQA is a dataset for testing human logical reasoning. It consists of 8,678 QA instances, covering multiple types of deductive reasoning. Results show that state-of-the-art neural models perform far worse than the human ceiling. The dataset can also serve as a benchmark for reinvestigating logical AI under the deep learning NLP setting.

Homepage: https://github.com/lgw863/LogiQA-dataset

### Citation

```
@misc{liu2020logiqa,
    title={LogiQA: A Challenge Dataset for Machine Reading Comprehension with Logical Reasoning},
    author={Jian Liu and Leyang Cui and Hanmeng Liu and Dandan Huang and Yile Wang and Yue Zhang},
    year={2020},
    eprint={2007.08124},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

### Groups and Tasks

#### Groups

* Not part of a group yet

#### Tasks

* `logiqa`

### Checklist

For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?

If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/logiqa/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/logiqa/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1656 }
# LogiQA 2.0

### Paper

LogiQA 2.0 — An Improved Dataset for Logical Reasoning in Natural Language Understanding

https://ieeexplore.ieee.org/document/10174688

The dataset is an amendment and re-annotation of LogiQA from 2020, a large-scale logical reasoning reading comprehension dataset adapted from the Chinese Civil Service Examination. This new version has an increased data size, the texts are refined with manual translation by professionals, and items with distinctive cultural features like Chinese idioms are removed. Furthermore, a two-way natural language inference (NLI) task is introduced, resulting in 35k premise-hypothesis pairs with gold labels, making it the first large-scale NLI dataset for complex logical reasoning.

Homepage: https://github.com/csitfun/LogiQA2.0

### Citation

```bibtex
@ARTICLE{10174688,
  author={Liu, Hanmeng and Liu, Jian and Cui, Leyang and Teng, Zhiyang and Duan, Nan and Zhou, Ming and Zhang, Yue},
  journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing},
  title={LogiQA 2.0 — An Improved Dataset for Logical Reasoning in Natural Language Understanding},
  year={2023},
  volume={},
  number={},
  pages={1-16},
  doi={10.1109/TASLP.2023.3293046}}
```

### Groups and Tasks

#### Groups

* Not part of a group yet

#### Tasks

* `logiqa2_zh`: The original dataset in Chinese.
* `logiqa2_NLI`: The NLI version of the dataset converted from the MRC version.
* `logieval`: Prompt based; https://github.com/csitfun/LogiEval

NOTE! The subtasks have not been verified yet.

### Checklist

* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation?
  * [x] The original paper does not. There is another implementation of this task, but it is designed for instruction-tuned models: https://github.com/csitfun/LogiEval

If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/logiqa2/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/logiqa2/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2239 }
# MathQA ### Paper MathQA: Towards Interpretable Math Word Problem Solving with Operation-Based Formalisms https://arxiv.org/pdf/1905.13319.pdf MathQA is a large-scale dataset of 37k English multiple-choice math word problems covering multiple math domain categories by modeling operation programs corresponding to word problems in the AQuA dataset (Ling et al., 2017). Homepage: https://math-qa.github.io/math-QA/ ### Citation ``` @misc{amini2019mathqa, title={MathQA: Towards Interpretable Math Word Problem Solving with Operation-Based Formalisms}, author={Aida Amini and Saadia Gabriel and Peter Lin and Rik Koncel-Kedziorski and Yejin Choi and Hannaneh Hajishirzi}, year={2019}, eprint={1905.13319}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Groups and Tasks #### Groups * `math_word_problems` #### Tasks * `mathqa`: The MathQA dataset, as a multiple choice dataset where the answer choices are not in context. ### Checklist For adding novel benchmarks/datasets to the library: * [x] Is the task an existing benchmark in the literature? * [x] Have you referenced the original paper that introduced the task? * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? * The MathQA dataset predates transformer-based prompted LLMs. We should, however, return to this task to ensure equivalence to the non-CoT version of mathQA used in the Chain-of-Thought paper. If other tasks on this dataset are already supported: * [x] Is the "Main" variant of this task clearly denoted? * [x] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [x] Have you noted which, if any, published evaluation setups are matched by this variant? * [x] Checked for equivalence with v0.3.0 LM Evaluation Harness
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/mathqa/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/mathqa/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 1906 }
# MC Taco ### Paper Title: `"Going on a vacation" takes longer than "Going for a walk": A Study of Temporal Commonsense Understanding` Abstract: https://arxiv.org/abs/1909.03065 MC-TACO is a dataset of 13k question-answer pairs that require temporal commonsense comprehension. The dataset contains five temporal properties, (1) duration (how long an event takes), (2) temporal ordering (typical order of events), (3) typical time (when an event occurs), (4) frequency (how often an event occurs), and (5) stationarity (whether a state is maintained for a very long time or indefinitely). WARNING: Running this task with a `--limit` arg will give misleading results! The corresponding dataset is structured such that each multiple-choice-question gathered by the authors is split into question-option pairs, where each such pair gets siloed into an individual document for plausibility testing. Because the harness shuffles these documents, setting `--limit` will likely "cut off" certain candidate answers. This is a problem because the task's metrics require an exhaustive evaluation of a question's options. See section 4 of the paper for details. Homepage: https://leaderboard.allenai.org/mctaco/submissions/public ### Citation ``` BibTeX-formatted citation goes here ``` ### Groups and Tasks #### Groups * Not part of a group yet. #### Tasks * `mc_taco` ### Checklist For adding novel benchmarks/datasets to the library: * [ ] Is the task an existing benchmark in the literature? * [ ] Have you referenced the original paper that introduced the task? * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? If other tasks on this dataset are already supported: * [ ] Is the "Main" variant of this task clearly denoted? * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
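Given the warning above about `--limit`, a full evaluation sketch without it might look like the following; the model name, device, and batch size are placeholder assumptions, and only the task name `mc_taco` comes from this README.

```bash
# Illustrative sketch only: evaluate on all of MC-TACO. Per the warning above,
# --limit is deliberately omitted so every question-option pair is scored and
# the per-question metrics remain meaningful.
lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-1.4b \
    --tasks mc_taco \
    --device cuda:0 \
    --batch_size 16
```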
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/mc_taco/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/mc_taco/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2051 }
# MedConceptsQA

### Paper

Title: `MedConceptsQA: Open Source Medical Concepts QA Benchmark`

Abstract: https://arxiv.org/abs/2405.07348

MedConceptsQA is a dedicated open source benchmark for medical concepts question answering. The benchmark comprises questions about various medical concepts across different vocabularies: diagnoses, procedures, and drugs. The questions are categorized into three levels of difficulty: easy, medium, and hard. Our benchmark serves as a valuable resource for evaluating the abilities of Large Language Models to interpret medical codes and distinguish between medical concepts.

### Citation

```
@article{shoham2024medconceptsqa,
  title={MedConceptsQA--Open Source Medical Concepts QA Benchmark},
  author={Shoham, Ofir Ben and Rappoport, Nadav},
  journal={arXiv preprint arXiv:2405.07348},
  year={2024}
}
```

### Groups and Tasks

#### Groups

* `med_concepts_qa`: Contains all the QA tasks (diagnoses, procedures, and drugs).

#### Tasks

* `med_concepts_qa_icd9cm` - ICD9-CM (diagnosis codes, ICD9 format) question-answering. This involves providing information, clarifications, and answering questions related to ICD-9-CM (International Classification of Diseases, 9th Revision, Clinical Modification) diagnosis codes.
* `med_concepts_qa_icd10cm` - ICD10-CM (diagnosis codes, ICD10 format) question-answering. This involves providing information, clarifications, and answering questions related to ICD-10-CM (International Classification of Diseases, 10th Revision, Clinical Modification) diagnosis codes.
* `med_concepts_qa_icd9proc` - ICD9-Proc (procedure codes, ICD9 format) question-answering. This involves providing information, clarifications, and answering questions related to ICD-9-PCS (International Classification of Diseases, 9th Revision, Procedure Coding System) procedure codes.
* `med_concepts_qa_icd10proc` - ICD10-Proc (procedure codes, ICD10 format) question-answering. This involves providing information, clarifications, and answering questions related to ICD-10-PCS (International Classification of Diseases, 10th Revision, Procedure Coding System) procedure codes.
* `med_concepts_qa_atc` - ATC (Anatomical Therapeutic Chemical Classification System) question-answering. This involves providing information, clarifications, and answering questions related to the ATC classification system, which is used for the classification of drugs and other medical products according to the organ or system on which they act and their therapeutic, pharmacological, and chemical properties.
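For orientation, the group name above could be used with the harness CLI as sketched below; individual vocabulary subtasks (e.g. `med_concepts_qa_icd10cm`) can be passed to `--tasks` in the same way. The model name, device, and batch size are placeholder assumptions; only the task identifiers come from this README.

```bash
# Illustrative sketch only: run the full MedConceptsQA group.
# Model name, device, and batch size are placeholders.
lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-1.4b \
    --tasks med_concepts_qa \
    --device cuda:0 \
    --batch_size 8
```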
{ "source": "simplescaling/s1", "title": "eval/lm-evaluation-harness/lm_eval/tasks/med_concepts_qa/README.md", "url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/med_concepts_qa/README.md", "date": "2025-02-01T02:38:16", "stars": 5206, "description": "s1: Simple test-time scaling", "file_size": 2559 }