This model uses the ChatML prompt format. The dataset contains about 1,400 entries ranging from 8k to 16k tokens, split three ways between long-context multi-turn chat, long-context summarization, and writing analysis. This is a full fine-tune using a linear RoPE scaling factor of 2.0, trained for five epochs with a learning rate of 1e-5.
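A minimal inference sketch, assuming the `transformers` library: the prompt is assembled by hand in ChatML form (the format named above), and the linear RoPE scaling factor of 2.0 from training is passed as a config override in case the uploaded config does not already carry it. The system/user messages are placeholders, not from this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openerotica/Llama-3-lima-nsfw-16k-test"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Match the linear RoPE scaling used at train time (factor 2.0).
# If the scaling is already baked into the model's config, this
# override is redundant but harmless.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    rope_scaling={"type": "linear", "factor": 2.0},
)

# Build a prompt in ChatML format: each turn is wrapped in
# <|im_start|>{role} ... <|im_end|> markers, and generation is
# primed with an open assistant turn.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nSummarize the passage below.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```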
