---
license: other
language:
- en
base_model: microsoft/Orca-2-13b
datasets:
- HuggingFaceH4/no_robots
---

The "microsoft/Orca-2-13b" model, fully fine-tuned on HuggingFaceH4/no_robots. It was trained on the entire dataset and achieved a test loss of 0.86.

Make sure to comply with the Microsoft Research License. Please read it before using this model.

This model was trained on the following chat template:

"<|USER|> message <|ASSISTANT|> message"
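
Below is a minimal usage sketch showing how the chat template above might be applied when prompting the model with the standard `transformers` API. The repository id is a placeholder (this card does not state the final repo name), and the generation settings are illustrative assumptions, not the training configuration.

```python
# Minimal sketch, assuming the fine-tuned weights are published to a Hugging Face repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-username/orca-2-13b-no-robots"  # placeholder: replace with the actual repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Build the prompt using the template described above:
# "<|USER|> message <|ASSISTANT|> message"
prompt = "<|USER|> Write a short poem about the sea. <|ASSISTANT|>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```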