Feedback

#1 by Huzderu

Tested on a long RP (30k+ context). I really like the longer replies, and while I know this is an overused term, it seems very creative compared to other Llama finetunes I've tried. Unfortunately, it seems to break down at longer contexts, with logical errors and repetition issues coming into play. It definitely feels different from v0.2, which was more succinct.

Thanks for sharing your experience! I think all these models still struggle with long context, as you noticed. I'm hoping that Llama 4 and other SOTA models in 2025 will improve how they handle details as the context size grows into the 32K range and beyond.
