Hypernova-experimental

Quantized to GGUF using llama.cpp

Tried some new stuff this time around, and the outcome was very different from what I expected. This is an experimental model created during the development of NovaAI.

Good at chatting and some roleplay. It sometimes mixes up characters and can occasionally struggle with context.

Prompt Template: Alpaca

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
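To use the Alpaca template above programmatically, you can wrap it in a small helper. A minimal sketch (the function name is hypothetical, not part of this repo):

```python
def build_alpaca_prompt(instruction: str) -> str:
    """Fill the Alpaca prompt template this model card specifies."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )

# Pass the result as the prompt string to whatever GGUF runtime you use.
print(build_alpaca_prompt("Summarize the plot of Hamlet in two sentences."))
```

The model's reply is expected to begin immediately after the `### Response:` line.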

Models Merged

The following models were included in the merge:

Some finetuning was done as well.
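The card does not list the merged models or the merge method. Merges of this kind are commonly built with mergekit; a hypothetical config sketch (model names, method, and weights are placeholders, not the actual recipe):

```yaml
# Illustrative mergekit config only -- not the recipe used for this model.
models:
  - model: org/model-a
    parameters:
      weight: 0.5
  - model: org/model-b
    parameters:
      weight: 0.5
merge_method: linear
dtype: float16
```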

Format: GGUF
Model size: 13B params
Architecture: llama
Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
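A rough way to pick a quantization is to estimate file size from the parameter count and bits per weight. This back-of-the-envelope sketch ignores metadata and the mixed-precision layouts real GGUF quant types use, so treat the numbers as approximations:

```python
def approx_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough size estimate: params * bits / 8 bytes, reported in GB (1e9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

# Approximate sizes for a 13B-parameter model at the offered bit widths.
for bits in (3, 4, 5, 6, 8, 16):
    print(f"{bits}-bit: ~{approx_gguf_size_gb(13e9, bits):.1f} GB")
```

For example, a 4-bit quant of a 13B model lands around 6.5 GB by this estimate; actual GGUF files run somewhat larger.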

Inference Providers
This model is not available via any of the supported Inference Providers, and it cannot be deployed to the HF Inference API: the model authors have turned it off explicitly.

Model tree for theNovaAI/Hypernova-experimental-GGUF