---
license: gemma
library_name: transformers
---

# Gemma 2 9B 8-bit

This is an 8-bit quantized version of Gemma 2 9B. The model belongs to Google, is licensed under the Gemma Terms of Use, and is stored here in quantized form only for convenience.

## How to use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the 8-bit quantized weights; device_map="auto" places them on the available GPU(s)
dtype = torch.float16
model = AutoModelForCausalLM.from_pretrained("nev/gemma-2-9b-8bit", torch_dtype=dtype, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("nev/gemma-2-9b-8bit")

# Generate a short completion (example prompt)
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```