Trained on all six supported languages, so it should hopefully be useful for each of them, though the quality of the datasets likely varies a lot between languages.

Uses ChatML as usual.
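For reference, a minimal sketch of what a ChatML-formatted prompt looks like; the helper function and the example messages are illustrative, not part of the model's tooling (in practice, `tokenizer.apply_chat_template` from Transformers does the same job if the chat template is set):

```python
def to_chatml(messages):
    """Render a list of {role, content} dicts as a ChatML string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # Leave the assistant turn open so the model completes it.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

# Hypothetical example conversation (Finnish user turn).
prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hei, mitä kuuluu?"},
])
print(prompt)
```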

LoRA: mpasila/Viking-SlimInstruct-LoRA-V1-7B

Uses the following datasets:

  • saillab/alpaca-icelandic-cleaned
  • kobprof/skolegpt-instruct
  • tollefj/nor-instruct-cleaned
  • skvarre/sv-instruct-v1
  • Gryphe/Sonnet3.5-SlimOrcaDedupCleaned-20k
  • LumiOpen/instruction-collection-fin
  • neph1/Alpaca-Lora-GPT4-Swedish-Refined

Uploaded Viking-SlimInstruct-V1-7B model

  • Developed by: mpasila
  • License: apache-2.0
  • Finetuned from model: LumiOpen/Viking-7B

This Llama-based model was trained 2x faster with Unsloth and Hugging Face's TRL library.

