This is basically a test to see whether conversion and inference work correctly in llama.cpp. It seems to work, though I won't add more quantization sizes for now.
Since this is merely a quantization of the original model, the license of the original model still applies!
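Since the point of this card is verifying llama.cpp inference, here is a minimal smoke-test sketch using the llama-cpp-python bindings. The GGUF filename below is an assumption, so substitute whichever file this repo actually ships; this also only exercises the text path, and image input would additionally require the model's mmproj file via llama.cpp's multimodal tooling.

```python
# Minimal text-only smoke test with llama-cpp-python (Python bindings for llama.cpp).
from llama_cpp import Llama

llm = Llama(
    model_path="InternVL3_5-1B-Instruct-F16.gguf",  # hypothetical filename, adjust to the actual GGUF in this repo
    n_ctx=2048,      # context window size
    verbose=False,   # suppress llama.cpp loading logs
)

# Run a single chat turn to confirm the converted model produces coherent output.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=32,
)
print(out["choices"][0]["message"]["content"])
```

If the model loads and the reply is coherent, the conversion round-trip is working.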
Model tree for QuantStack/InternVL3_5-1B-Instruct-gguf:
- Base model: OpenGVLab/InternVL3_5-1B-Pretrained
- Finetuned: OpenGVLab/InternVL3_5-1B-Instruct