---
configs:
- config_name: small
  data_files:
  - split: train
    path: "train-small.parquet"
  - split: val
    path: "val-small.parquet"
  - split: test
    path: "test-small.parquet"
  default: true
- config_name: large
  data_files:
  - split: train
    path: "train-large.parquet"
  - split: val
    path: "val-large.parquet"
  - split: test
    path: "test-large.parquet"
---

# ExcelFormer Benchmark

The benchmark datasets used in [ExcelFormer](https://arxiv.org/abs/2301.02819). A usage example is shown below:

```python
from datasets import load_dataset
import pandas as pd
import numpy as np

# The default 'small' config loads the 96 small-scale datasets;
# pass 'large' to load the 21 large-scale datasets instead.
data = {}
datasets = load_dataset('jyansir/excelformer')
# datasets = load_dataset('jyansir/excelformer', 'large')

# Process the train split; the val and test splits are handled the same way.
dataset = datasets['train'].to_dict()
for table_name, table, task in zip(dataset['dataset_name'], dataset['table'], dataset['task']):
    data[table_name] = {
        'X_num': None if not table['X_num'] else pd.DataFrame.from_dict(table['X_num']),  # numerical features
        'X_cat': None if not table['X_cat'] else pd.DataFrame.from_dict(table['X_cat']),  # categorical features
        'y': np.array(table['y']),   # targets
        'y_info': table['y_info'],   # target metadata
        'task': task,                # task type of the table
    }
```
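
Once loaded, each entry in `data` can be fed to any tabular model. Below is a minimal sketch (not from the original paper or card) of training a quick scikit-learn baseline on one table; the key `'adult'` and the assumption that it is a classification table are purely illustrative, so substitute any key that actually appears in `data`.

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import pandas as pd

table = data['adult']  # hypothetical table name; use any key present in `data`

# Concatenate whichever feature blocks exist for this table.
X_parts = [t for t in (table['X_num'], table['X_cat']) if t is not None]
X = pd.concat(X_parts, axis=1)

# Ordinal-encode string-valued (categorical) columns for a quick baseline.
for col in X.columns:
    if X[col].dtype == object:
        X[col] = X[col].astype('category').cat.codes

y = table['y']

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print('accuracy:', accuracy_score(y_te, clf.predict(X_te)))
```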