# TODO:

- [x] Save LoRA separately
- [ ] Load LoRA separately
- [ ] Merge LoRA (a generic `peft` sketch is at the end of this README)

# How to use

1. Set up the environment

   ```bash
   conda create -n llava python=3.10 -y
   conda activate llava

   pip install --upgrade pip  # Enable PEP 660 support.
   pip install -e ".[train]"
   ```

2. Dataset

   The dataset is stored in JSON files. Each item has the following format:

   ```json
   {
       "id": "",
       "image": "/path/to/image",
       "conversations": [
           {
               "from": "human",
               "value": ""
           },
           {
               "from": "gpt",
               "value": ""
           }
       ],
       ...
   }
   ```

   Set the `--data_path` flag to a folder containing `train.json` and `test.json` (a loading sketch is at the end of this README).

3. Pre-trained checkpoint

   Hugging Face usually downloads the pretrained checkpoint automatically, but this can take a long time if your server is in mainland China. Alternatively, a copy of the checkpoint is available under `61.170.32.4:/mnt1/lyc/huggingface/hub` (see the cache note at the end of this README).

4. Finetuning

   Sample command:

   ```bash
   bash scripts/finetune/train_llava.sh
   ```

5. Testing

   Sample command:

   ```bash
   bash scripts/finetune/test_llava.sh
   ```
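# Notes

A quick way to sanity-check that a dataset folder matches the format in step 2. The file names come from this README; the field checks mirror the sample item and are otherwise assumptions, not this repo's own validation code:

```python
import json
from pathlib import Path

# Verify that every item carries the fields shown in the step-2 example.
data_dir = Path("/path/to/data")  # the folder you pass via --data_path

for split in ("train.json", "test.json"):
    items = json.loads((data_dir / split).read_text())
    for item in items:
        assert {"id", "image", "conversations"} <= item.keys()
        for turn in item["conversations"]:
            assert turn["from"] in {"human", "gpt"}
            assert isinstance(turn["value"], str)
    print(f"{split}: {len(items)} samples look OK")
```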
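If you copy the shared cache from step 3 to your own machine, one possible way to make `transformers` use it is the `HF_HOME` environment variable; a minimal sketch (the local path is a placeholder):

```python
import os

# Hypothetical workaround: point HF_HOME at a local copy of the shared
# cache (the parent directory of `hub/`) *before* importing transformers
# or huggingface_hub, so checkpoints resolve locally instead of being
# re-downloaded.
os.environ["HF_HOME"] = "/path/to/local/huggingface"
```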
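Merging LoRA is still unchecked in the TODO list above. Until a script lands in this repo, a generic `peft`-based sketch (not this repo's own code; all paths are placeholders, and it assumes the adapter was saved in the standard `peft` format):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Placeholders: the base checkpoint and the adapter saved during training
# ("Save LoRA separately" in the TODO list).
base = AutoModelForCausalLM.from_pretrained(
    "/path/to/base-model", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "/path/to/lora-adapter")

merged = model.merge_and_unload()  # fold the LoRA deltas into the base weights
merged.save_pretrained("/path/to/merged-model")
```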