General discussion/feedback.
[Q5_K_S-imat]
This really retains a lot of Layla's basedness.
Vision works as expected.
Honestly, it responded to all questions no matter how controversial or dangerous, and trust me, I test with some wild stuff. It might add a disclaimer or notice at the beginning or end of a response for some topics, but it will answer and comply. If this holds to the quality of V4, as it seems to, this is a big win:
Getting such a well-performing model that understands and handles formatting well, at the 7B parameter size, that will comply with questionable requests while also being vision-compatible?
I think this time, you really cooked good.
I may also be biased because I like models merged with Layla, but I am cautiously optimistic.
> Getting such a good performing model that understands and does formatting well, in the 7B parameter size, that will comply to questionable requests while being compatible for vision?
Don't forget it should be good with up to 32k context as well ;) Appreciate the feedback, been running 4.20 heavy in testing/personal use. It feels a tad dumb, guessing somewhere around 69 on the memerboard, but with some adjustments to how my cards are written I'm seeing it perform quite well.