VRAM usage issue during image extraction
#35 opened by jimmyseven
I'm feeding a 1*3*512*512 image into a float16 model, and that single image is taking up 6 GB of VRAM. This clearly makes batch training impossible. Is this reasonable?
Hey @jimmyseven, thanks for reaching out! Could you share a code snippet to reproduce this?
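For reference, here is a minimal sketch of how one could measure peak VRAM for a single 512*512 float16 input. The model here is a placeholder torchvision backbone, not the model from the original report, so the numbers will differ; the point is just to isolate how much memory one forward pass allocates.

```python
import torch
import torchvision.models as models  # placeholder backbone, assumption only

device = torch.device("cuda")

# Stand-in for the actual model, cast to float16
model = models.resnet50().half().to(device).eval()

# Single 512x512 RGB image, batch size 1, float16
x = torch.randn(1, 3, 512, 512, dtype=torch.float16, device=device)

torch.cuda.reset_peak_memory_stats(device)
with torch.no_grad():  # inference only; gradients would add significant memory
    _ = model(x)
torch.cuda.synchronize(device)

peak_gb = torch.cuda.max_memory_allocated(device) / 1024**3
print(f"Peak allocated VRAM for one image: {peak_gb:.2f} GB")
```

Running the forward pass under `torch.no_grad()` also helps separate inference memory from training memory (activations kept for backprop, optimizer state), which is relevant to the batch-training concern above.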