---
license: gemma
datasets:
- anthracite-org/stheno-filtered-v1.1
base_model: google/gemma-2-2b-it
---

# QuantFactory/Gemma-2-2B-Stheno-Filtered-GGUF

This is a quantized version of [SaisExperiments/Gemma-2-2B-Stheno-Filtered](https://huggingface.co/SaisExperiments/Gemma-2-2B-Stheno-Filtered) created using llama.cpp.
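A minimal usage sketch with the llama-cpp-python bindings (the quantization suffix in the filename below is illustrative; point it at whichever `.gguf` file from this repo you actually downloaded):

```python
# Load one of the GGUF quants with llama-cpp-python and run a chat turn.
from llama_cpp import Llama

llm = Llama(
    model_path="Gemma-2-2B-Stheno-Filtered.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=1024,  # matches the cutoff_len the finetune was trained with
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```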
# Original Model Card

I don't have anything else, so you get a cursed cat image.
# Basic info

This is [anthracite-org/stheno-filtered-v1.1](https://huggingface.co/datasets/anthracite-org/stheno-filtered-v1.1) trained over [unsloth/gemma-2-2b-it](https://huggingface.co/unsloth/gemma-2-2b-it).
It saw 76.6M tokens.
This time it took 14 hours, and I'm pretty sure I've been training with the wrong prompt template X-X
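Since the config below sets `template: gemma`, here is a minimal sketch (assuming the `transformers` library) that prints the prompt format the base model's tokenizer actually expects, for anyone who wants to double-check the template:

```python
# Inspect the chat format that `template: gemma` is meant to match,
# using the base model's own tokenizer as the reference.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
prompt = tok.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)
# Roughly:
# <bos><start_of_turn>user
# Hello!<end_of_turn>
# <start_of_turn>model
```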
# Training config:
```yaml
cutoff_len: 1024
dataset: stheno-3.4
dataset_dir: data
ddp_timeout: 180000000
do_train: true
finetuning_type: lora
flash_attn: auto
fp16: true
gradient_accumulation_steps: 8
include_num_input_tokens_seen: true
learning_rate: 5.0e-05
logging_steps: 5
lora_alpha: 64
lora_dropout: 0
lora_rank: 64
lora_target: all
lr_scheduler_type: cosine
max_grad_norm: 1.0
max_samples: 100000
model_name_or_path: unsloth/gemma-2-2b-it
num_train_epochs: 3.0
optim: adamw_8bit
output_dir: saves/Gemma-2-2B-Chat/lora/stheno
packing: false
per_device_train_batch_size: 2
plot_loss: true
preprocessing_num_workers: 16
quantization_bit: 4
quantization_method: bitsandbytes
report_to: none
save_steps: 100
stage: sft
template: gemma
use_unsloth: true
warmup_steps: 0
```
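The keys above appear to follow LLaMA-Factory's YAML schema. As a rough, assumption-level translation of just the LoRA hyperparameters into the PEFT library (a sketch, not the script actually used):

```python
# Map the LoRA settings above onto peft.LoraConfig.
# Note: effective batch size = per_device_train_batch_size (2)
# x gradient_accumulation_steps (8) = 16 sequences per optimizer step.
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                         # lora_rank
    lora_alpha=64,                # lora_alpha
    lora_dropout=0.0,             # lora_dropout
    target_modules="all-linear",  # lora_target: all
    task_type="CAUSAL_LM",
)
```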