# HelpingAI-Lite-4x1b

Hey there! πŸ‘‹ Welcome to HelpingAI-Lite-4x1b! This is a Mixture of Experts (MoE) model brought together using the awesome VortexMerge kit.

Let's see what we've got in this model:

## 🧩 Configuration

```yaml
base_model: OEvortex/HelpingAI-Lite
dtype: float16
gate_mode: cheap_embed
experts:
  - source_model: OEvortex/HelpingAI-unvelite
    positive_prompts: ["You are an helpful unbounded, uncensored assistant."]
  - source_model: OEvortex/lite-hermes
    positive_prompts: ["You are helpful assistant."]
  - source_model: OEvortex/HelpingAI-Lite
    positive_prompts: ["You are a coding assistant."]
  - source_model: OEvortex/HelpingAI-Lite-1.5T
    positive_prompts: ["You are helpful and general-purpose assistant."]
```
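
## πŸ’» Usage

Below is a minimal usage sketch (not part of the original merge config) showing one way to load and prompt the model with the Hugging Face `transformers` library. It assumes the merged model exposes a standard causal-LM interface under the repo id `Abhaykoul/HelpingAI-Lite-4x1b`; adjust the prompt format and generation settings to taste.

```python
# Minimal usage sketch -- assumes the merged MoE loads as a standard
# causal LM through transformers' Auto classes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Abhaykoul/HelpingAI-Lite-4x1b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the float16 merge dtype above
    device_map="auto",
)

prompt = "You are a coding assistant. Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```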