---
configs:
  - config_name: small
    data_files:
      - split: train
        path: train-small.parquet
      - split: val
        path: val-small.parquet
      - split: test
        path: test-small.parquet
    default: true
  - config_name: large
    data_files:
      - split: train
        path: train-large.parquet
      - split: val
        path: val-large.parquet
      - split: test
        path: test-large.parquet
---

# ExcelFormer Benchmark

This repository hosts the datasets used in ExcelFormer. Example usage:

```python
from datasets import load_dataset
import pandas as pd
import numpy as np

# Process the train split; the val and test splits are handled the same way.
data = {}
datasets = load_dataset('jyansir/excelformer')  # loads the 96 small-scale datasets by default
# datasets = load_dataset('jyansir/excelformer', 'large')  # loads the 21 large-scale datasets
dataset = datasets['train'].to_dict()
for table_name, table, task in zip(dataset['dataset_name'], dataset['table'], dataset['task']):
    data[table_name] = {
        # Feature blocks are stored as column-oriented dicts; empty blocks become None.
        'X_num': None if not table['X_num'] else pd.DataFrame.from_dict(table['X_num']),
        'X_cat': None if not table['X_cat'] else pd.DataFrame.from_dict(table['X_cat']),
        'y': np.array(table['y']),
        'y_info': table['y_info'],
        'task': task,
    }
```
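Each record stores its feature blocks as column-oriented dicts, so the conversion can be checked offline on a mock record with the same nesting (the field names mirror the loop above; the values and the `demo_table` name are invented for illustration):

```python
import numpy as np
import pandas as pd

# Mock record mimicking one row of the benchmark (values are invented).
record = {
    'dataset_name': 'demo_table',
    'task': 'binclass',
    'table': {
        'X_num': {'age': [23.0, 45.0, 31.0]},        # numerical features: column -> values
        'X_cat': {'color': ['red', 'blue', 'red']},  # categorical features: column -> values
        'y': [0, 1, 0],
        'y_info': {},
    },
}

# Same parsing as in the loop above, applied to a single record.
table = record['table']
X_num = None if not table['X_num'] else pd.DataFrame.from_dict(table['X_num'])
X_cat = None if not table['X_cat'] else pd.DataFrame.from_dict(table['X_cat'])
y = np.array(table['y'])

print(X_num.shape, X_cat.shape, y.shape)  # (3, 1) (3, 1) (3,)
```

The `None` checks matter because a dataset may have only numerical or only categorical features, in which case the corresponding block is empty.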