Formats: csv
Languages: English
Size: < 1K
Tags: benchmark, llm-evaluation, large-language-models, large-language-model, large-multimodal-models, llm-training
License: mit
Update README.md
README.md CHANGED
@@ -52,5 +52,18 @@ The MixEval consists of two files: `CleanResponses` and `AllResponses`. Below pr
 └── CleanResponses.csv
 └── KeyQuestions.csv
 ```
+
+# Dataset Usage
+An example of using the dataset with the `datasets` library is shown at https://github.com/cmudrc/MSEval
+
+To load this dataset with pandas:
+```python
+import pandas as pd
+
+df = pd.read_csv("hf://datasets/cmudrc/Material_Selection_Eval/AllResponses.csv")
+```
+
+Replace AllResponses with CleanResponses or KeyQuestions in the path as required.
+
 license: mit
 ---
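For the `datasets`-library route that the added README text points to, a minimal sketch is shown below. The repo id and CSV file names are taken from the pandas example above, but the exact `load_dataset` arguments are an assumption on my part; the linked cmudrc/MSEval repository is the authoritative example.

```python
# Minimal sketch (assumption): load the CSV files from the
# cmudrc/Material_Selection_Eval dataset repo with the Hugging Face
# `datasets` library. See https://github.com/cmudrc/MSEval for the
# canonical usage example.
from datasets import load_dataset

# Expose each CSV in the repo as its own split via `data_files`.
ds = load_dataset(
    "cmudrc/Material_Selection_Eval",
    data_files={
        "all": "AllResponses.csv",
        "clean": "CleanResponses.csv",
        "key": "KeyQuestions.csv",
    },
)

print(ds["all"][0])  # first row of AllResponses.csv as a dict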