ModernBERT for Norwegian
Have you thought about training a Norwegian ModernBERT (https://huggingface.co/answerdotai/ModernBERT-base) model?
That would be very useful.
Yes, we are planning to release a collection of new NorBERTs that will be more optimized for inference speed :)
Great news! Thank you.
Will that also include the possible token length?
What exactly do you mean by that? :)
Thank you for the answer, and my apologies, I somehow stopped mid-sentence.
My question was supposed to say:
In the blog post introducing ModernBERT, they also say that they will increase the maximum sequence length to 8192 tokens. Is this something you will look into doing? :)
I see :) Yes, we will increase the sequence length. But note that even the current NorBERT3 can already accept sequences longer than the 512 tokens it was trained on, thanks to its bucketed relative positional encoding.
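For anyone who wants to try this, here is a minimal sketch of running NorBERT3 on an input past its 512-token training length. It assumes the `ltg/norbert3-base` checkpoint on the Hub (the exact checkpoint name is my assumption, not stated above), loaded with `trust_remote_code=True` since it ships custom modeling code:

```python
# Minimal sketch: feed NorBERT3 a sequence longer than its 512-token
# training length. Assumes the "ltg/norbert3-base" checkpoint, which
# uses custom modeling code (hence trust_remote_code=True).
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "ltg/norbert3-base"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name, trust_remote_code=True)
model.eval()

# Repeat a Norwegian sentence until we are well past 512 tokens.
long_text = "Dette er en veldig lang tekst om norske språkmodeller. " * 300
inputs = tokenizer(long_text, return_tensors="pt")  # note: no truncation
print(inputs["input_ids"].shape)  # sequence length well beyond 512

# Bucketed relative positions mean there is no absolute position table
# to run out of, so the forward pass still works on the longer input.
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.logits.shape)  # (1, seq_len, vocab_size)
```

Two caveats: attention is still quadratic, so memory grows quickly with length, and the bucketed relative positions only guarantee that longer inputs run, not that quality holds up far beyond the 512-token training length.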