---
license: apache-2.0
tags:
- openvino
- openvino-export
pipeline_tag: text-generation
base_model: OuteAI/Lite-Oute-1-300M-Instruct
---

This model was converted to OpenVINO from [`OuteAI/Lite-Oute-1-300M-Instruct`](https://huggingface.co/OuteAI/Lite-Oute-1-300M-Instruct) using [optimum-intel](https://github.com/huggingface/optimum-intel) via the [export](https://huggingface.co/spaces/echarlaix/openvino-export) space (a sketch for reproducing the export locally follows the model description below).

# Lite-Oute-1-300M-Instruct

Lite-Oute-1-300M-Instruct is a Lite series model based on the Mistral architecture, comprising approximately 300 million parameters.
This model aims to improve upon our previous 150M version by increasing the parameter count and training on a more refined dataset. The primary goal of this 300-million-parameter model is to offer enhanced performance while maintaining efficiency for deployment on a variety of devices.
With its larger size, it should provide improved context retention and coherence; however, users should note that, as a compact model, it still has limitations compared to larger language models. The model was trained on 30 billion tokens with a context length of 4096.
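The export performed by the space can also be reproduced locally with optimum-intel. A minimal sketch, assuming optimum-intel is installed with its OpenVINO extras (the output directory name is arbitrary):

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

# Convert the upstream checkpoint to OpenVINO IR on the fly
model = OVModelForCausalLM.from_pretrained("OuteAI/Lite-Oute-1-300M-Instruct", export=True)

# Save the converted model and its tokenizer to a local directory
model.save_pretrained("ov-lite-oute-1-300m-instruct")
tokenizer = AutoTokenizer.from_pretrained("OuteAI/Lite-Oute-1-300M-Instruct")
tokenizer.save_pretrained("ov-lite-oute-1-300m-instruct")
```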
## Available versions:
- Lite-Oute-1-300M-Instruct
- Lite-Oute-1-300M-Instruct-GGUF
- Lite-Oute-1-300M
- Lite-Oute-1-300M-GGUF
## Chat format

> [!IMPORTANT]
> This model uses the **ChatML** template. Ensure you use the correct template:

```
<|im_start|>system
[System message]<|im_end|>
<|im_start|>user
[Your question or message]<|im_end|>
<|im_start|>assistant
[The model's response]<|im_end|>
```
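Rather than assembling this string by hand, you can let the tokenizer render it. A minimal sketch, assuming the tokenizer in this repository ships the ChatML chat template:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("FM-1976/Lite-Oute-1-300M-Instruct-openvino")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
# Render the ChatML prompt as a string, ending with the assistant header
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```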
## Benchmarks:

| Benchmark     | 5-shot | 0-shot |
|---------------|--------|--------|
| ARC Challenge | 26.37  | 26.02  |
| ARC Easy      | 51.43  | 49.79  |
| CommonsenseQA | 20.72  | 20.31  |
| HellaSWAG     | 34.93  | 34.50  |
| MMLU          | 25.87  | 24.00  |
| OpenBookQA    | 31.40  | 32.20  |
| PIQA          | 65.07  | 65.40  |
| Winogrande    | 52.01  | 53.75  |
## Usage

First make sure you have optimum-intel installed:

```bash
pip install optimum[openvino]
```

You can then load the model as follows:

```python
from optimum.intel import OVModelForCausalLM

model_id = "FM-1976/Lite-Oute-1-300M-Instruct-openvino"
model = OVModelForCausalLM.from_pretrained(model_id)
```
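Putting this together with the ChatML format described above, here is a minimal generation sketch (the prompt and sampling parameters are illustrative, not tuned recommendations):

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "FM-1976/Lite-Oute-1-300M-Instruct-openvino"
model = OVModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain what OpenVINO is in one sentence."},
]
# Tokenize a ChatML prompt ending with the assistant header
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.4)
# Decode only the newly generated tokens
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```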