Feedback: mildly disappointed there isn't a 9B version
These models seem great, and I'm not ungrateful at all. But I really enjoyed Gemma 2 9B, and it's still my main model while I wait for Ollama support. For me, on my system, it's the perfect mix of speed and intelligence, much better than similarly sized models like Llama 8B. Next time I would appreciate it if a 9B model were released; it really is the perfect balance, whether that's down to the extra billion parameters or some black magic, I'm not sure. Hopefully 12B doesn't require much more memory or inference time, and I'm not sure 4B will be quite at the 9B level in every regard outside of benchmarks.
P.S. Thanks for upping the context length; that was a huge pain point with Gemma 2 for me and my use cases. Also, vision support is neat.
Also, all the models hallucinate a bit too much, especially with links. I would just not bother trying to get them to link things without web access; suboptimal behavior imo.