---
base_model: CallComply/openchat-3.5-0106-11b
inference: false
language:
- en
license: apache-2.0
model_creator: CallComply
model_name: openchat-3.5-0106-11b
model_type: mistral
tags:
- openchat
- mistral
- C-RLFT
pipeline_tag: text-generation
quantized_by: brittlewis12
---

# openchat-3.5-0106-11b GGUF

Original model: [openchat-3.5-0106-11b](https://huggingface.co/CallComply/openchat-3.5-0106-11b)

Model creator: [CallComply](https://huggingface.co/CallComply)

This repo contains GGUF format model files for CallComply’s openchat-3.5-0106-11b.

### What is GGUF?

GGUF is a file format for representing AI models. It is the third version of the format, introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Converted using llama.cpp build 1894 (revision [5c99960](https://github.com/ggerganov/llama.cpp/commit/5c999609013a30c06e6fd28be8db5c2074bcc196))

### Prompt template: OpenChat (GPT4 Correct)

```
GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:
```

---

## Download & run with [cnvrs](https://twitter.com/cnvrsai) on iPhone, iPad, and Mac!

![cnvrs.ai](https://pbs.twimg.com/profile_images/1744049151241797632/0mIP-P9e_400x400.jpg)

[cnvrs](https://testflight.apple.com/join/sFWReS7K) is the best app for private, local AI on your device:
- create & save **Characters** with custom system prompts & temperature settings
- download and experiment with any **GGUF model** you can [find on HuggingFace](https://huggingface.co/models?library=gguf)!
- make it your own with custom **Theme colors**
- powered by Metal ⚡️ & [Llama.cpp](https://github.com/ggerganov/llama.cpp), with **haptics** during response streaming!
- **try it out** yourself today, on [Testflight](https://testflight.apple.com/join/sFWReS7K)!
- follow [cnvrs on twitter](https://twitter.com/cnvrsai) to stay up to date

---
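
### Example: downloading a GGUF file

A minimal sketch for fetching one quantized file from this repo with `huggingface_hub`. The repo id and filename below are assumptions for illustration; check the Files tab of this repo for the actual quantization names.

```python
# Sketch: download a single GGUF file from the Hub.
# repo_id and filename are assumed — substitute the real values from this repo.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="brittlewis12/openchat-3.5-0106-11b-GGUF",   # assumed repo id
    filename="openchat-3.5-0106-11b.Q4_K_M.gguf",        # assumed filename
)
print(model_path)  # local path to the downloaded GGUF file
```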
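
### Example: OpenChat (GPT4 Correct) prompting with llama-cpp-python

A minimal sketch of applying the prompt template shown above, assuming `llama-cpp-python` and a locally downloaded GGUF file. The file path, context size, and sampling settings are illustrative assumptions, not tuned recommendations; note that each turn ends with `<|end_of_turn|>`, which is also used as the stop token.

```python
# Sketch: build a GPT4 Correct prompt and generate with llama-cpp-python.
from llama_cpp import Llama

# Path to a downloaded GGUF file (assumed filename — see the download sketch above).
MODEL_PATH = "openchat-3.5-0106-11b.Q4_K_M.gguf"

llm = Llama(model_path=MODEL_PATH, n_ctx=4096)

def gpt4_correct_prompt(turns):
    """Join (role, message) turns into the OpenChat 'GPT4 Correct' format,
    ending with an open assistant turn for the model to complete."""
    prompt = ""
    for role, message in turns:  # role is "User" or "Assistant"
        prompt += f"GPT4 Correct {role}: {message}<|end_of_turn|>"
    return prompt + "GPT4 Correct Assistant:"

prompt = gpt4_correct_prompt([("User", "How are you today?")])
out = llm(prompt, max_tokens=256, stop=["<|end_of_turn|>"], temperature=0.7)
print(out["choices"][0]["text"].strip())
```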