This model was fine-tuned on SmolInstruct's property prediction instruction dataset and HoneyBee's instruction dataset.

[LoRA Config Parameters]
- train: true
- fine_tune_type: lora
- seed: 0
- num_layers: 8
- batch_size: 2
- iters: 1000
- val_batches: 25
- learning_rate: 1e-5
- steps_per_report: 10
- steps_per_eval: 200
- resume_adapter_file: null
- adapter_path: "adapters"
- save_every: 100
- test: false
- test_batches: 100
- max_seq_length: 2048
- grad_checkpoint: false
- lora_parameters: {keys: ["self_attn.q_proj", "self_attn.v_proj"], rank: 8, alpha: 8, dropout: 0.0, scale: 20.0}
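
These parameters follow mlx-lm's LoRA trainer config format. Below is a minimal sketch of the same settings as a YAML config file; the `model` and `data` entries are assumed placeholders, not values taken from this card:

```yaml
# lora_config.yaml — the settings above in mlx-lm's config format.
# `model` and `data` are placeholders; point them at your own base model and data.
model: "mistralai/Mistral-7B-Instruct-v0.3"   # assumed base model
data: "data"                                  # directory with train.jsonl / valid.jsonl
train: true
fine_tune_type: lora
seed: 0
num_layers: 8            # apply LoRA adapters to the last 8 transformer layers
batch_size: 2
iters: 1000
val_batches: 25
learning_rate: 1e-5
steps_per_report: 10
steps_per_eval: 200
resume_adapter_file: null
adapter_path: "adapters"
save_every: 100
test: false
test_batches: 100
max_seq_length: 2048
grad_checkpoint: false
lora_parameters:
  keys: ["self_attn.q_proj", "self_attn.v_proj"]
  rank: 8
  alpha: 8
  dropout: 0.0
  scale: 20.0
```

With mlx-lm installed, a run along the lines of `mlx_lm.lora --config lora_config.yaml` should pick these settings up; check the docs of your installed version for the exact entry point.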

Model: jarvisloh/Mistral-7B-Instruct-v0.3-Chemistry-gguf-v0.1
Format: GGUF
Model size: 7.25B params
Architecture: llama
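
To run the GGUF weights locally, a llama-cpp-python snippet along these lines should work. This is a sketch: the `filename` glob and the example prompt are illustrative assumptions, so check the repository's file list for the actual quant filename.

```python
# Sketch: load the GGUF weights from the Hub with llama-cpp-python
# and ask a chemistry property-prediction question.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="jarvisloh/Mistral-7B-Instruct-v0.3-Chemistry-gguf-v0.1",
    filename="*Q4_K_M.gguf",  # placeholder glob; match it to an actual file in the repo
    n_ctx=2048,               # matches the max_seq_length used during fine-tuning
)

out = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Is the molecule with SMILES CCO soluble in water?"}
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```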

