---
license: mit
pipeline_tag: text-generation
tags:
- biology
- genomics
- long-context
library_name: transformers
---
# GENERator-eukaryote-3b-base model
## About
In this repository, we present GENERator, a generative genomic foundation model featuring a context length of 98k base pairs and 3B parameters, trained on an expansive dataset comprising 386 billion base pairs of eukaryotic DNA. The extensive and diverse pre-training data endow GENERator with enhanced understanding and generation capabilities across a wide range of organisms.
For more technical details, please refer to our paper [GENERator: A Long-Context Generative Genomic Foundation Model](https://arxiv.org/abs/2502.07272).
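The usage examples below read the maximum sequence length from the model configuration. As a quick sanity check, the configured context window (counted in tokens) can be inspected without loading the full model; the snippet below is a minimal sketch along those lines.

```python
from transformers import AutoConfig

# Inspect the configured context window (counted in tokens).
config = AutoConfig.from_pretrained(
    "GenerTeam/GENERator-eukaryote-3b-base", trust_remote_code=True
)
print(config.max_position_embeddings)
```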
## How to use
### Simple example 1: generation
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load the tokenizer and model.
tokenizer = AutoTokenizer.from_pretrained("GenerTeam/GENERator-eukaryote-3b-base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("GenerTeam/GENERator-eukaryote-3b-base")
config = model.config
max_length = config.max_position_embeddings
# Define input sequences.
sequences = [
    "ATGAGGTGGCAAGAAATGGGCTAC",
    "GAATTCCATGAGGCTATAGAATAATCTAAGAGAAAT"
]
# Process the sequences
sequences = [tokenizer.bos_token + sequence for sequence in sequences]
# Tokenize the sequences
tokenizer.padding_side = "left"
inputs = tokenizer(
    sequences,
    add_special_tokens=False,
    return_tensors="pt",
    padding=True,
    truncation=True,
    max_length=max_length
)
# Generate the sequences
with torch.inference_mode():
    outputs = model.generate(**inputs, max_new_tokens=32, temperature=0.00001, top_k=1)
# Decode the generated sequences
decoded_sequences = tokenizer.batch_decode(outputs, skip_special_tokens=True)
# Print the decoded sequences
print(decoded_sequences)
# Expect nonsensical decoded sequences here (e.g., 'AAAAAA'):
# the input sequences are too short to provide sufficient context.
```
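The call above uses a near-zero temperature with `top_k=1`, which makes decoding effectively greedy. If more diverse continuations are desired, sampling can be enabled instead; the sketch below assumes the `tokenizer`, `model`, and `inputs` objects from the example above, and the sampling parameters shown (`temperature=0.8`, `top_p=0.95`) are illustrative values rather than recommended settings. It also decodes only the newly generated tokens by slicing off the prompt, which works here because padding is on the left.

```python
# Sampling-based generation (continues from the example above).
with torch.inference_mode():
    sampled = model.generate(
        **inputs,
        max_new_tokens=32,
        do_sample=True,   # sample instead of greedy decoding
        temperature=0.8,  # illustrative value
        top_p=0.95,       # illustrative value
    )

# With left padding, every row starts with the (padded) prompt,
# so the new tokens are everything after the prompt length.
prompt_length = inputs["input_ids"].shape[1]
new_tokens = sampled[:, prompt_length:]
print(tokenizer.batch_decode(new_tokens, skip_special_tokens=True))
```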
### Simple example 2: embedding
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load the tokenizer and model.
tokenizer = AutoTokenizer.from_pretrained("GenerTeam/GENERator-eukaryote-3b-base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("GenerTeam/GENERator-eukaryote-3b-base")
config = model.config
max_length = config.max_position_embeddings
# Define input sequences.
sequences = [
    "ATGAGGTGGCAAGAAATGGGCTAC",
    "GAATTCCATGAGGCTATAGAATAATCTAAGAGAAAT"
]
# Tokenize the sequences with add_special_tokens=True to automatically add special tokens,
# such as the BOS and EOS tokens, at the appropriate positions.
tokenizer.padding_side = "right"
inputs = tokenizer(
    sequences,
    add_special_tokens=True,
    return_tensors="pt",
    padding=True,
    truncation=True,
    max_length=max_length
)
# Perform a forward pass through the model to obtain the outputs, including hidden states.
with torch.inference_mode():
    outputs = model(**inputs, output_hidden_states=True)
# Retrieve the hidden states from the last layer.
hidden_states = outputs.hidden_states[-1] # Shape: (batch_size, sequence_length, hidden_size)
# Use the attention_mask to determine the index of the last token in each sequence.
# Since add_special_tokens=True is used, the last token is typically the EOS token.
attention_mask = inputs["attention_mask"]
last_token_indices = attention_mask.sum(dim=1) - 1 # Index of the last token for each sequence
# Extract the embedding corresponding to the EOS token for each sequence.
seq_embeddings = []
for i, token_index in enumerate(last_token_indices):
    # Fetch the embedding for the last token (EOS token).
    seq_embedding = hidden_states[i, token_index, :]
    seq_embeddings.append(seq_embedding)
# Stack the embeddings into a tensor with shape (batch_size, hidden_size)
seq_embeddings = torch.stack(seq_embeddings)
print("Sequence Embeddings:", seq_embeddings)
```
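The EOS-token embeddings above can serve as fixed-length sequence representations. The sketch below, which assumes the `seq_embeddings`, `hidden_states`, and `attention_mask` tensors from the example above, shows two common follow-ups: comparing the two sequences with cosine similarity, and mean pooling over non-padding tokens as an alternative to the EOS embedding.

```python
import torch.nn.functional as F

# Cosine similarity between the two sequence embeddings (continues from the example above).
similarity = F.cosine_similarity(seq_embeddings[0], seq_embeddings[1], dim=0)
print("Cosine similarity:", similarity.item())

# Alternative: mean pooling over non-padding tokens instead of taking the EOS embedding.
mask = attention_mask.unsqueeze(-1).to(hidden_states.dtype)  # (batch, seq_len, 1)
mean_embeddings = (hidden_states * mask).sum(dim=1) / mask.sum(dim=1)
print("Mean-pooled embeddings shape:", mean_embeddings.shape)
```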
## Citation
```
@misc{wu2025generator,
      title={GENERator: A Long-Context Generative Genomic Foundation Model},
      author={Wei Wu and Qiuyi Li and Mingyang Li and Kun Fu and Fuli Feng and Jieping Ye and Hui Xiong and Zheng Wang},
      year={2025},
      eprint={2502.07272},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.07272},
}
```