---
{}
---
|
Encoder-only version of the [ANKH2 large model](https://huggingface.co/ElnaggarLab/ankh2-ext1) (the ANKH2 paper has not been released yet). The encoder-only version is ideal for protein representation tasks.

## To download
|
```python
from transformers import T5EncoderModel, AutoTokenizer

model_path = 'Synthyra/ANKH2_large'
model = T5EncoderModel.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
```
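
Once downloaded, the encoder can be used to extract protein representations. A minimal sketch (the example sequence and mean pooling are illustrative choices, not prescribed by the model):

```python
import torch
from transformers import T5EncoderModel, AutoTokenizer

model_path = 'Synthyra/ANKH2_large'
model = T5EncoderModel.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model.eval()

# Hypothetical example sequence for illustration
sequence = 'MSLKRKNIALIPAAGIGVRFGADKPKQYVEIGSKTVLEHVL'
inputs = tokenizer(sequence, return_tensors='pt')

with torch.no_grad():
    # last_hidden_state: (batch, tokens, hidden_dim); tokens include special tokens
    hidden_states = model(**inputs).last_hidden_state

# Mean-pool over the token dimension for a fixed-size per-protein embedding
embedding = hidden_states.mean(dim=1)
```

Per-residue tasks can use `hidden_states` directly, while sequence-level tasks typically pool as above.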
|

We are working on a version of T5-based PLMs with [FlexAttention](https://pytorch.org/blog/flexattention/), which we will release once FlexAttention supports the learned relative position bias used in T5. Stay tuned.