# Umbral-Mind-r128-LoRA
This is a LoRA adapter extracted from a language model using [mergekit](https://github.com/arcee-ai/mergekit).
## LoRA Details
This LoRA adapter was extracted from Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B and uses NousResearch/Meta-Llama-3-8B-Instruct as a base.
### Parameters
The following command was used to extract this LoRA adapter:

```sh
/usr/local/bin/mergekit-extract-lora --out-path=loras/Umbral-Mind-r128-LoRA --model=Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B --base-model=NousResearch/Meta-Llama-3-8B-Instruct --no-lazy-unpickle --max-rank=128 --gpu-rich -v --embed-lora --skip-undecomposable
```
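Conceptually, LoRA extraction factors the weight difference between the fine-tuned model and its base into two low-rank matrices, with the rank capped by `--max-rank`. The toy NumPy sketch below illustrates that idea with a truncated SVD on small random matrices; it is not mergekit's actual implementation, and all names in it (`W_base`, `W_tuned`, the toy size `d = 64`) are illustrative stand-ins:

```python
import numpy as np

# Toy sketch of rank-limited LoRA extraction (NOT mergekit's real code):
# approximate the weight delta between fine-tuned and base matrices with
# a truncated SVD, analogous to --max-rank=128 above.
rng = np.random.default_rng(0)

d = 64                                   # toy hidden size
W_base = rng.standard_normal((d, d))     # stand-in base weight
low_rank_update = 0.01 * rng.standard_normal((d, 16)) @ rng.standard_normal((16, d))
W_tuned = W_base + low_rank_update       # stand-in fine-tuned weight

delta = W_tuned - W_base
U, S, Vt = np.linalg.svd(delta, full_matrices=False)

r = 16                                   # truncation rank (the --max-rank analogue)
B = U[:, :r] * S[:r]                     # "B" factor, shape (d, r)
A = Vt[:r]                               # "A" factor, shape (r, d)

# The toy delta has rank 16, so a rank-16 factorization recovers it almost
# exactly; real weight deltas are full rank, so truncation is lossy there.
rel_err = np.linalg.norm(delta - B @ A) / np.linalg.norm(delta)
```

With real checkpoints the delta matrices are full rank, which is why flags like `--skip-undecomposable` exist for tensors that do not factor cleanly.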