# InternVL3.5 FP8
Part of a collection of OpenGVLab's InternVL3.5 models quantized to FP8.
This is an FP8 dynamic (W8A8) quantization of OpenGVLab/InternVL3_5-38B, optimized for high-performance inference with vLLM. Weights are stored in FP8, and activations are quantized to FP8 per token at runtime (hence "dynamic"). The quantization recipe preserves the model's core visual understanding capabilities while reducing the memory footprint by nearly 40%.
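The exact recipe used for this checkpoint is not reproduced here, but with LLM Compressor an FP8 dynamic (W8A8) quantization that keeps the vision components in higher precision typically looks like the sketch below. The ignore patterns and module names are illustrative assumptions, not the recipe actually applied to this model.

```python
from transformers import AutoModel
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "OpenGVLab/InternVL3_5-38B"

# Load the original model; trust_remote_code is needed for the InternVL architecture.
model = AutoModel.from_pretrained(MODEL_ID, torch_dtype="auto", trust_remote_code=True)

# FP8_DYNAMIC: static FP8 weights, dynamic per-token FP8 activations (W8A8).
# The ignore list below is an assumption: it keeps lm_head and the vision tower
# unquantized, which is one common way to preserve visual understanding.
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_DYNAMIC",
    ignore=["lm_head", "re:.*vision_model.*", "re:.*mlp1.*"],
)

# FP8 dynamic quantization is data-free, so no calibration dataset is required.
oneshot(model=model, recipe=recipe)

model.save_pretrained("InternVL3_5-38B-FP8-Dynamic", save_compressed=True)
```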
You can serve the model using vLLM's OpenAI-compatible API server.
```bash
vllm serve brandonbeiler/InternVL3_5-38B-FP8-Dynamic \
    --quantization compressed-tensors \
    --served-model-name internvl3_5-38b \
    --reasoning-parser qwen3 \
    --trust-remote-code \
    --max-model-len 32768 \
    --tensor-parallel-size 1  # Adjust based on your GPU setup
```
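Once the server is running, any OpenAI-compatible client can talk to it. The sketch below uses the openai Python package and assumes vLLM's default port (8000); the image URL is a placeholder. Because the server is started with --reasoning-parser qwen3, any reasoning trace is returned in a separate reasoning_content field.

```python
from openai import OpenAI

# vLLM's OpenAI-compatible server listens on port 8000 by default.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="internvl3_5-38b",  # matches --served-model-name above
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/cat.jpg"}},
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ],
    temperature=0.6,
    max_tokens=512,
)

message = response.choices[0].message
print(getattr(message, "reasoning_content", None))  # reasoning trace, if produced
print(message.content)                              # final answer
```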
## Notes

| Attribute | Value |
|---|---|
| Original Model | OpenGVLab/InternVL3_5-38B |
| Quantized Model | brandonbeiler/InternVL3_5-38B-FP8-Dynamic |
| Quantization Method | FP8 Dynamic (W8A8) |
| Quantization Library | LLM Compressor v0.7.1 |
| Quantized By | brandonbeiler |
The following snippet demonstrates offline inference with the vLLM Python API.
```python
from vllm import LLM, SamplingParams
from PIL import Image

# Load the quantized model.
# trust_remote_code is required to load the custom InternVL architecture.
model = LLM(
    model="brandonbeiler/InternVL3_5-38B-FP8-Dynamic",
    trust_remote_code=True,
    max_model_len=32768,     # InternVL3.5 supports a 32k context length
    tensor_parallel_size=1,  # Adjust for your hardware setup
)

# A temperature of 0.6 is recommended for this model.
sampling_params = SamplingParams(temperature=0.6, max_tokens=512)

# Generate a response. The <image> placeholder in the prompt is filled with the
# image passed via multi_modal_data. For best results, format the prompt with the
# model's chat template (or use LLM.chat, which applies it automatically).
image = Image.open("example.jpg")
prompt = "<image>\nDescribe this image."
outputs = model.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    sampling_params,
)
print(outputs[0].outputs[0].text)
```
This model was quantized using the following environment:
```
llmcompressor==0.7.1
compressed-tensors==0.10.2
transformers==4.55.0
torch==2.7.1
vllm==0.10.1.1
```
Quantized with ❤️ using LLM Compressor for the open-source community.
## Base model

OpenGVLab/InternVL3_5-38B-Pretrained