|
Quantization made by Richard Erkhov. |
|
|
|
[Github](https://github.com/RichardErkhov) |
|
|
|
[Discord](https://discord.gg/pvy7H8DZMG) |
|
|
|
[Request more models](https://github.com/RichardErkhov/quant_request) |
|
|
|
|
|
InstructProtein - AWQ |
|
- Model creator: https://huggingface.co/hicai-zju/ |
|
- Original model: https://huggingface.co/hicai-zju/InstructProtein/ |
|
|
|
|
|
|
|
|
|
Original model description: |
|
--- |
|
license: mit |
|
--- |
|
|
|
# InstructProtein |
|
|
|
InstructProtein is the first large generative language model to explore bidirectional generation between human language and protein language.

It is based on the OPT-1.3B architecture and follows a two-step training approach: it is first pre-trained on protein and natural-language corpora, then fine-tuned on a protein knowledge instruction dataset.

Through this instruction tuning, InstructProtein outperforms larger general-purpose foundation models on protein understanding and design tasks.
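As a minimal usage sketch, the model can be loaded with the standard Hugging Face `transformers` text-generation API. The model ID below points at the original checkpoint; the AWQ repository name and the exact instruction-prompt format used during training are assumptions, not documented here.

```python
# A minimal sketch, assuming the standard `transformers` causal-LM API.
# The prompt template is an assumption; consult the original repository
# for the exact instruction format used during fine-tuning.

def build_prompt(instruction: str) -> str:
    # Simple newline-terminated instruction prompt (assumed format).
    return instruction.strip() + "\n"

if __name__ == "__main__":
    from transformers import AutoModelForCausalLM, AutoTokenizer  # pip install transformers

    model_id = "hicai-zju/InstructProtein"  # original checkpoint; the AWQ repo name may differ
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    prompt = build_prompt("Describe the function of the protein with the following sequence: <protein sequence>")
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

For the AWQ-quantized weights, loading typically also requires an AWQ-capable backend (e.g. the `autoawq` package) installed alongside `transformers`.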
|
|
|
## Limitations |
|
|
|
The current model, developed through instruction tuning on a knowledge instruction dataset, serves as a preliminary example.

Despite its initial success in controlled settings, it lacks the robustness needed for complex, real-world, production-level tasks.
|
|
|
## Reference |
|
|
|
For more information, please take a look at our [paper](https://arxiv.org/abs/2310.03269) and [repository](https://github.com/HICAI-ZJU/InstructProtein). |
|
|
|
|
|
|
|
|
|
|