Dataset and hyperparameters for training
Hello,
what dataset and which hyperparameters were used for training?
Hello,
we've trained it on an internal dataset with the basic hyperparameters.
Hello @programmnix-askui,
Thank you for your response! What exactly do you mean by "basic hyperparameters"? 😉
What learning rate and learning rate scheduler did you use? And can you tell us something about the size of the dataset?
I intend to fine-tune your model on a custom internal GUI dataset. Could you provide some suggestions regarding LoRA parameters, such as `r` or `lora_alpha`? I've read that the following modules are commonly targeted when fine-tuning Florence-2 models:
```python
target_modules = ["q_proj", "o_proj", "k_proj", "v_proj", "linear", "Conv2d", "lm_head", "fc2"]
```
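For context, this is roughly the LoRA setup I have in mind, a minimal sketch based on the commonly shared Florence-2 fine-tuning recipes. The `r`, `lora_alpha`, and dropout values are my own placeholder guesses, and I'm assuming `AskUI/PTA-1` is the checkpoint id:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# r / lora_alpha / lora_dropout are placeholder guesses, not confirmed
# PTA-1 training settings; this is exactly what I'd like advice on.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    target_modules=["q_proj", "o_proj", "k_proj", "v_proj",
                    "linear", "Conv2d", "lm_head", "fc2"],
    task_type="CAUSAL_LM",
)

model = AutoModelForCausalLM.from_pretrained("AskUI/PTA-1", trust_remote_code=True)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # sanity-check which modules actually get wrapped
```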
Furthermore, how should the dataset be structured? I assume it should follow the format `<OPEN_VOCABULARY_DETECTION>{task_prompt}`, since you used the default OPEN_VOCABULARY_DETECTION task for Florence-2. However, it's unclear to me exactly how the `task_prompt` should be formulated. In your demo (https://huggingface.co/spaces/AskUI/PTA-1), the examples don't seem to follow a typical imperative-style utterance.
For instance, this utterance fails:
But a simple combination of nouns works:
Additionally, referring expressions also seem to fail, e.g.:
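For reference, this is how I'm calling the model, a sketch following the standard Florence-2 inference pattern (the generation settings are my own defaults, not values from your docs):

```python
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "AskUI/PTA-1"  # assuming this is the checkpoint behind the demo
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("screenshot.png")
task = "<OPEN_VOCABULARY_DETECTION>"
text_input = "search bar"  # noun-phrase style, which the demo seems to accept

inputs = processor(text=task + text_input, images=image, return_tensors="pt")
generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=1024,
    num_beams=3,
)
raw = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
parsed = processor.post_process_generation(
    raw, task=task, image_size=(image.width, image.height)
)
print(parsed)  # e.g. {'<OPEN_VOCABULARY_DETECTION>': {'bboxes': [...], ...}}
```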
Could you clarify the expected format for `task_prompt`?
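In case it helps pin down the answer, this is how I would currently serialize a training sample, assuming the standard Florence-2 loc-token convention of quantizing box coordinates into 1000 bins relative to the image size; the phrase and box below are made up:

```python
def format_example(phrase, box, image_width, image_height):
    """Serialize one sample in Florence-2's loc-token format.

    Assumes the standard Florence-2 convention (coordinates quantized
    into 1000 bins over the image size); not a confirmed PTA-1 recipe.
    """
    x1, y1, x2, y2 = box
    locs = "".join(
        f"<loc_{int(round(v * 999))}>"
        for v in (x1 / image_width, y1 / image_height,
                  x2 / image_width, y2 / image_height)
    )
    prompt = f"<OPEN_VOCABULARY_DETECTION>{phrase}"
    target = f"{phrase}{locs}"
    return prompt, target

# Made-up example: a "submit button" at pixel box (102, 430, 205, 470)
# in a 1920x1080 screenshot:
#   prompt -> "<OPEN_VOCABULARY_DETECTION>submit button"
#   target -> "submit button<loc_53><loc_398><loc_107><loc_435>"
```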
Looking forward to your insights!