This model was fine-tuned on SmolInstruct's property-prediction instruction dataset and HoneyBee's instruction dataset.
[LoRA Config Parameters]

```yaml
train: true
fine_tune_type: lora
seed: 0
num_layers: 8
batch_size: 2
iters: 1000
val_batches: 25
learning_rate: 1e-5
steps_per_report: 10
steps_per_eval: 200
resume_adapter_file: null
adapter_path: "adapters"
save_every: 100
test: false
test_batches: 100
max_seq_length: 2048
grad_checkpoint: false
lora_parameters:
  keys: ["self_attn.q_proj", "self_attn.v_proj"]
  rank: 8
  alpha: 8
  dropout: 0.0
  scale: 20.0
```
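To illustrate what these adapter parameters mean, here is a minimal NumPy sketch of a LoRA-adapted linear layer (the kind attached to `self_attn.q_proj` and `self_attn.v_proj` above): the frozen weight is augmented with a scaled rank-8 update. The dimensions are illustrative placeholders, not Mistral-7B's actual hidden size, and this is a conceptual sketch rather than the mlx-lm implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes; rank and scale mirror the config above.
d_in, d_out, rank, scale = 64, 64, 8, 20.0

W = rng.normal(size=(d_out, d_in))        # frozen base weight (e.g. a q_proj matrix)
A = rng.normal(size=(rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))               # trainable up-projection, zero-initialized

def lora_forward(x):
    # Base path plus scaled low-rank update: W x + scale * B (A x)
    return W @ x + scale * (B @ (A @ x))
```

Because `B` starts at zero, the adapted layer initially matches the frozen base layer exactly; training only has to learn the low-rank update, which is why so few parameters (here, `rank * (d_in + d_out)` per adapted matrix) are enough.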
Model tree for jarvisloh/Mistral-7B-Instruct-v0.3-Chemistry-gguf-v0.1
- Base model: mistralai/Mistral-7B-v0.3
- Finetuned from: mistralai/Mistral-7B-Instruct-v0.3