---
license: cc-by-sa-4.0
---

## emozilla/landmark-llama-7b

This model is a ready-to-use, out-of-the-box version of the LLaMA-7B variant of [Landmark Attention](https://arxiv.org/abs/2305.16300).

The code is adapted from the [Landmark Attention GitHub repository](https://github.com/epfml/landmark-attention), and the weights are derived from the weight diff published at [epfml/landmark-attention-llama7b-wdiff](https://huggingface.co/epfml/landmark-attention-llama7b-wdiff).

As a LLaMA variant, this model may be subject to the LLaMA license.

### To use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch

tokenizer = AutoTokenizer.from_pretrained("emozilla/landmark-llama-7b", use_fast=False)

# trust_remote_code loads the custom Landmark Attention modeling code shipped with this repo
model = AutoModelForCausalLM.from_pretrained(
    "emozilla/landmark-llama-7b",
    torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

print(pipe("Somebody once told me the world is gonna roll me",
           max_new_tokens=256, temperature=0.8, do_sample=True))
```
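
Landmark Attention is aimed at prompts longer than the base model's usual context window, so you may want to skip the `pipeline` wrapper and call `generate` directly on a long input. The following is a minimal sketch, not part of the original card: the prompt contents, generation settings, and the assumption that the remote modeling code works with plain `generate` (as the pipeline example above implies) are all illustrative.

```python
# Minimal sketch (not from the original card): long-prompt generation using the
# `model` and `tokenizer` loaded above. Prompt and settings are illustrative only.
long_prompt = "\n".join(f"Fact {i}: the access code for door {i} is {1000 + i}." for i in range(300))
long_prompt += "\nQuestion: What is the access code for door 42?\nAnswer:"

inputs = tokenizer(long_prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=32, do_sample=True, temperature=0.8)

# Decode only the newly generated tokens
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```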

You can configure the Landmark Attention parameters by setting `mem_freq`, `mem_top_k`, `mem_max_seq_len`, and `mem_max_cache_size` on the model config.

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("emozilla/landmark-llama-7b", trust_remote_code=True)
config.mem_top_k = 6

model = AutoModelForCausalLM.from_pretrained(
    "emozilla/landmark-llama-7b", config=config,
    torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")
```
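
As an alternative not shown in the original card, the same attributes can usually be overridden by passing them as keyword arguments to `from_pretrained`, since Transformers forwards unrecognized keyword arguments to the configuration when no explicit `config` object is given; whether this applies cleanly to the remote-code config used here is an assumption.

```python
# Sketch (assumption): rely on Transformers forwarding unknown kwargs such as
# mem_top_k to the loaded configuration instead of building an AutoConfig first.
model = AutoModelForCausalLM.from_pretrained(
    "emozilla/landmark-llama-7b", mem_top_k=6,
    torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")
```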