---
license: other
language:
  - en
tags:
  - text-to-speech
  - speech-synthesis
  - quantized
  - low-vram
  - vibevoice
---

# VibeVoice 7B - Low VRAM Quantized Models

Pre-quantized versions of VibeVoice 7B for GPUs with limited VRAM.

## Available Versions

- `4bit/` - 4-bit quantized model (~8 GB VRAM needed)
- `8bit/` - 8-bit quantized model (~12 GB VRAM needed). NOTE: temporarily removed until I can test it again; I'll re-upload it soon.

## Usage

```python
import torch

from vibevoice.modular.modeling_vibevoice_inference import VibeVoiceForConditionalGenerationInference
from vibevoice.processor.vibevoice_processor import VibeVoiceProcessor

# For the 4-bit model
model = VibeVoiceForConditionalGenerationInference.from_pretrained(
    "Dannidee/VibeVoice7b-low-vram/4bit",
    device_map='cuda',
    torch_dtype=torch.bfloat16,
)
processor = VibeVoiceProcessor.from_pretrained("Dannidee/VibeVoice7b-low-vram/4bit")
```
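
Once the model and processor are loaded, generation follows the upstream VibeVoice demo scripts. The sketch below is an assumption based on that demo (the `text`/`voice_samples` processor arguments, `cfg_scale`, `processor.tokenizer`, and `processor.save_audio` may differ between releases, and the voice sample paths are placeholders); check the demo code in the VibeVoice repository for the exact API:

```python
# Sketch only - argument names follow the upstream VibeVoice demo and may differ by version.
script = "Speaker 1: Hello and welcome to the show.\nSpeaker 2: Thanks for having me!"
voice_samples = ["voices/speaker1.wav", "voices/speaker2.wav"]  # hypothetical reference clips

inputs = processor(
    text=[script],
    voice_samples=[voice_samples],
    padding=True,
    return_tensors="pt",
)
outputs = model.generate(
    **inputs,
    max_new_tokens=None,
    cfg_scale=1.3,                    # classifier-free guidance strength used in the demo
    tokenizer=processor.tokenizer,
)
processor.save_audio(outputs.speech_outputs[0], output_path="output.wav")
```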

## VRAM Requirements

- 4-bit: ~8 GB total VRAM
- 8-bit: ~12 GB total VRAM
- Original: ~19 GB total VRAM
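
Not sure which version fits your card? A quick way to check total VRAM with PyTorch (assumes a CUDA device is available):

```python
import torch

# Report total VRAM on the first CUDA device to decide between the 4-bit and 8-bit models.
total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
print(f"GPU 0: {total_gb:.1f} GB total VRAM")
```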

See individual model folders for detailed information.