---
license: cc-by-nc-4.0
---
<h1 align="center">HIGHT: Hierarchical Graph Tokenization for Graph-Language Alignment</h1>
<p align="center">
<a href="https://arxiv.org/abs/2406.14021"><img src="https://img.shields.io/badge/arXiv-2406.14021-b31b1b.svg" alt="Paper"></a>
<a href="https://github.com/LFhase/HIGHT"><img src="https://img.shields.io/badge/-Github-grey?logo=github" alt="Github"></a>
<!-- <a href="https://colab.research.google.com/drive/1t0_4BxEJ0XncyYvn_VyEQhxwNMvtSUNx?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Colab"></a> -->
<a href="https://arxiv.org/abs/2406.14021"> <img alt="Pub" src="https://img.shields.io/static/v1?label=Pub&message=ICML%2725&color=blue"> </a>
<!-- <a href="https://github.com/LFhase/HIGHT/blob/main/LICENSE"> <img alt="License" src="https://img.shields.io/github/license/LFhase/CIGA?color=blue"> </a> -->
<!-- <a href="https://icml.cc/virtual/2024/poster/3455"> <img src="https://img.shields.io/badge/Video-grey?logo=Kuaishou&logoColor=white" alt="Video"></a> -->
<!-- <a href="https://lfhase.win/files/slides/HIGHT.pdf"> <img src="https://img.shields.io/badge/Slides-grey?&logo=MicrosoftPowerPoint&logoColor=white" alt="Slides"></a> -->
<!-- <a href="https://icml.cc/media/PosterPDFs/ICML%202022/a8acc28734d4fe90ea24353d901ae678.png"> <img src="https://img.shields.io/badge/Poster-grey?logo=airplayvideo&logoColor=white" alt="Poster"></a> -->
</p>
This repo contains the model checkpoints of our ICML 2025 paper *[Hierarchical Graph Tokenization for Molecule-Language Alignment](https://arxiv.org/abs/2406.14021)*, which was also presented at the ICML 2024 workshop on [Foundation Models in the Wild](https://icml.cc/virtual/2024/workshop/29954). 😆😆😆
## File Structure
The pretrained Hierarchical VQ-VAE model is stored in `hivqvae.pth`.
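For a quick sanity check, the checkpoint can be inspected with plain PyTorch before wiring it into the hierarchical tokenizer defined in the GitHub repo. This is a minimal sketch; the exact key layout of the `.pth` file is an assumption here, and the model class itself lives in the HIGHT codebase:

```python
import torch

# Minimal sketch: open the pretrained hierarchical VQ-VAE checkpoint and
# list a few parameter names and shapes. The model class is defined in the
# HIGHT GitHub repository; we only load the raw weights here.
ckpt = torch.load("hivqvae.pth", map_location="cpu")

# Depending on how it was saved, the file may be a raw state_dict or a dict
# wrapping one (e.g., under a "state_dict" key) -- adjust as needed.
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt

for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))
```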
The checkpoints of the graph-language models based on llama2-7b-chat and vicuna-v1-3-7b are contained in `/llama2` and `/vicuna`, respectively.
Inside each directory, the checkpoints are organized as follows (using `vicuna` as an example); a short download sketch is given after the list:
- `llava-hvqvae2-vicuna-v1-3-7b-pretrain`: the model after stage-1 pretraining;
- `graph-text-molgen`: models finetuned on Mol-Instructions data for different tasks, e.g., forward reaction prediction;
- `molcap-llava-hvqvae2-vicuna-v1-3-7b-finetune_lora-50ep`: the model finetuned with LoRA on the ChEBI-20 dataset for molecular captioning;
- `MoleculeNet-llava-hvqvae2-vicuna-v1-3-7b-finetune_lora-large*`: models finetuned on different classification-based molecular property prediction tasks from MoleculeNet.
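To fetch a single checkpoint folder without cloning the whole repository, `huggingface_hub` can filter files by path. A minimal sketch, assuming this repository's id is `lfhase/HIGHT` and using the vicuna stage-1 directory named above; the resulting model and LoRA adapters should then be loaded with the code from the HIGHT GitHub repo:

```python
from huggingface_hub import snapshot_download

# Minimal sketch: download only the vicuna-based stage-1 checkpoint.
# repo_id is an assumption based on this repository's name; adjust the
# allow_patterns entry to pull a different checkpoint folder.
local_dir = snapshot_download(
    repo_id="lfhase/HIGHT",
    allow_patterns=["vicuna/llava-hvqvae2-vicuna-v1-3-7b-pretrain/*"],
)
print("Checkpoint downloaded under:", local_dir)
```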
## Citation
If you find our model, paper, and repo useful, please cite our paper:
```bibtex
@inproceedings{chen2025hierarchical,
  title     = {Hierarchical Graph Tokenization for Molecule-Language Alignment},
  author    = {Yongqiang Chen and Quanming Yao and Juzheng Zhang and James Cheng and Yatao Bian},
  booktitle = {Forty-second International Conference on Machine Learning},
  year      = {2025},
  url       = {https://openreview.net/forum?id=wpbNczwAwV}
}
```