Overview
This dataset contains the encoder embeddings and LLM prediction results from the paper 'Model Generalization on Text Attribute Graphs: Principles with Large Language Models' by Haoyu Wang, Shikun Liu, Rongzhe Wei, and Pan Li.
Dataset Description
The dataset is organized as follows:
/dataset/
│── [dataset_name]/
│   │── processed_data.pt   # labels and graph information
│   │── [encoder]_x.pt      # features extracted by different encoders
│   │── categories.csv      # raw texts of the label names
│   │── raw_texts.pt        # raw text of each node
File Descriptions
processed_data.pt: A PyTorch file storing the processed dataset, including the graph structure and node labels. Note that in the heterophilic datasets this file is named [Dataset].pt (where Dataset can be Cornell, etc.) and should be opened with DGL.
[encoder]_x.pt: Feature matrices extracted by the different encoders, where [encoder] is the encoder name.
categories.csv: Raw label names.
raw_texts.pt: Raw node texts. Note that in the heterophilic datasets this file is named [Dataset].csv (where Dataset can be Cornell, etc.).
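A minimal loading sketch for the layout above. The exact objects stored in each .pt file are not specified here, so the variable names below and the use of dgl.load_graphs for the heterophilic files are assumptions; inspect the loaded objects before relying on their structure.

import torch
import pandas as pd

dataset = "cora"    # one of the dataset names listed below
encoder = "sbert"   # one of the encoder names listed below

# Graph structure and node labels (PyTorch-serialized)
data = torch.load(f"dataset/{dataset}/processed_data.pt")

# Node features produced by the chosen encoder
x = torch.load(f"dataset/{dataset}/{encoder}_x.pt")

# Raw label names and raw node texts
categories = pd.read_csv(f"dataset/{dataset}/categories.csv")
raw_texts = torch.load(f"dataset/{dataset}/raw_texts.pt")

# Heterophilic datasets (cornell, texas, wisconsin, washington) use
# [Dataset].pt / [Dataset].csv instead and should be opened with DGL;
# dgl.load_graphs is an assumption about how the .pt file was saved.
import dgl
graphs, label_dict = dgl.load_graphs("dataset/cornell/Cornell.pt")
texts = pd.read_csv("dataset/cornell/Cornell.csv")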
Dataset Naming Convention
[dataset_name] should be one of the following:
cora
citeseer
pubmed
bookhis
bookchild
sportsfit
wikics
cornell
texas
wisconsin
washington
Encoder Naming Convention
[encoder] can be one of the following:
sbert (the Sentence-BERT encoder)
roberta (the RoBERTa encoder)
llmicl_primary (the vanilla LLM2Vec encoder)
llmicl_class_aware (the task-adaptive encoder)
llmgpt_text-embedding-3-large (OpenAI's text-embedding-3-large embedding API)
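Individual files can also be fetched from the Hub. A sketch, assuming each file lives at [dataset_name]/[filename] inside this repository:

from huggingface_hub import hf_hub_download
import torch

feat_path = hf_hub_download(
    repo_id="Graph-COM/Text-Attributed-Graphs",
    repo_type="dataset",
    filename="cora/llmicl_class_aware_x.pt",
)
features = torch.load(feat_path)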
Results Description
The ./results/ folder contains the prediction results of GPT-4o on node text classification and of GPT-4o-mini on homophily ratio prediction.
./results/nc_[DATASET]/4o/llm_baseline      # node text classification predictions
./results/nc_[DATASET]/4o_mini/agenth       # homophily ratio predictions
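A small sketch of how these result paths are assembled; the file format inside each folder is not specified here, so only the paths themselves are shown:

import os

dataset = "cora"  # any [DATASET] from the naming convention above
nc_dir = os.path.join("results", f"nc_{dataset}", "4o", "llm_baseline")
hr_dir = os.path.join("results", f"nc_{dataset}", "4o_mini", "agenth")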