Compiled lmsys/vicuna-7b-v1.5 for AWS Neuron using optimum-neuron (optimum-neuron==0.0.21 with Neuron SDK 2.18.2).

The model was exported with the following command:

```bash
optimum-cli export neuron \
  --model lmsys/vicuna-7b-v1.5 \
  --batch_size 1 \
  --sequence_length 1024 \
  --num_cores 2 \
  --auto_cast_type fp16 \
  ./models/lmsys/vicuna-7b-v1.5
```
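A minimal inference sketch follows, assuming the exported artifacts are available at the local path used above and that `optimum-neuron` and the Neuron SDK are installed on an Inferentia2 instance. The prompt format and generation settings are illustrative, not part of the export step.

```python
# Sketch: load the pre-compiled model and run generation with optimum-neuron.
from optimum.neuron import NeuronModelForCausalLM
from transformers import AutoTokenizer

# The artifacts are already compiled, so no export is triggered here.
model = NeuronModelForCausalLM.from_pretrained("./models/lmsys/vicuna-7b-v1.5")
tokenizer = AutoTokenizer.from_pretrained("./models/lmsys/vicuna-7b-v1.5")

# Vicuna-style prompt (illustrative).
prompt = "USER: What is AWS Inferentia?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt")

# Generation is bound by the compiled static shapes
# (batch_size=1, sequence_length=1024 from the export command).
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```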