This model was trained 2x faster with Unsloth and Hugging Face's TRL library.
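The exact training script isn't part of this card. As a rough sketch of what an Unsloth + TRL fine-tuning run typically looks like (the base model name comes from the model tree below; the dataset path, LoRA settings, and hyperparameters are placeholders, not the actual configuration used here):

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model with Unsloth's optimized kernels (4-bit to save VRAM).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="yam-peleg/Experiment26-7B",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)

# Placeholder dataset: each JSON record is assumed to have a "text" field.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```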
This is an experiment in fixing models that exhibit incorrect behaviors.

It also serves to test and refine a specific training and evaluation pipeline for Large Language Models (LLMs). The primary objective is to identify potential optimizations by exploring adjustments to data preprocessing and engineering, model training algorithms, architectural efficiency, and evaluation metrics.
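To try the model, a standard transformers loading snippet should work (the repo name is taken from the model tree below; generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BarraHome/Mistroll-7B-v2.2")
model = AutoModelForCausalLM.from_pretrained(
    "BarraHome/Mistroll-7B-v2.2",
    device_map="auto",   # requires accelerate
    torch_dtype="auto",
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```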
Quantized version (GGUF)
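The GGUF build can be run locally with llama-cpp-python; a minimal sketch follows. The repo and file names are assumptions based on common naming conventions, so check the actual quantized upload:

```python
from llama_cpp import Llama

# Hypothetical GGUF repo/file names -- verify against the real upload.
llm = Llama.from_pretrained(
    repo_id="BarraHome/Mistroll-7B-v2.2-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=2048,
)

out = llm("What does this model do?", max_tokens=128)
print(out["choices"][0]["text"])
```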
Thanks to Yam for the incredible experiment, and to the Unsloth community!
PS: Numero uno brothers!
Model tree for BarraHome/Mistroll-7B-v2.2
- Base model: yam-peleg/Experiment26-7B