https://github.com/zejunwang1/bloom_tuning
ε―δ»₯ιθΏε¦δΈδ»£η θ°η¨ bloom-820m-chat 樑εζ₯ηζε―Ήθ―οΌ
```python
from transformers import BloomTokenizerFast, BloomForCausalLM

model_name_or_path = "WangZeJun/bloom-820m-chat"
tokenizer = BloomTokenizerFast.from_pretrained(model_name_or_path)
model = BloomForCausalLM.from_pretrained(model_name_or_path).cuda()
model = model.eval()

# Single-turn prompt format: the user message followed by the </s> token.
input_pattern = "{}</s>"
text = "你好"
input_ids = tokenizer(input_pattern.format(text), return_tensors="pt").input_ids
input_ids = input_ids.cuda()

outputs = model.generate(
    input_ids,
    do_sample=True,
    max_new_tokens=1024,
    top_p=0.85,
    temperature=0.3,
    repetition_penalty=1.2,
    eos_token_id=tokenizer.eos_token_id,
)

# The generated sequence contains the prompt followed by the response;
# slice off the prompt tokens before decoding.
input_ids_len = input_ids.size(1)
response_ids = outputs[0][input_ids_len:]
response = tokenizer.decode(response_ids)
print(response)
```
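The prompt formatting and response slicing above can be factored into small helpers, which makes multi-query use less error-prone. This is a minimal sketch; `build_prompt` and `strip_prompt` are hypothetical names (not part of the repo or of `transformers`), and they only reproduce the `"{}</s>"` pattern and the prompt-length slicing shown in the snippet above.

```python
from typing import List


def build_prompt(text: str) -> str:
    # Single-turn prompt pattern used above: user text followed by </s>.
    return "{}</s>".format(text)


def strip_prompt(output_ids: List[int], input_len: int) -> List[int]:
    # generate() returns prompt + response token ids;
    # keep only the newly generated response ids.
    return output_ids[input_len:]


# Example with plain token-id lists (no model needed):
ids = [10, 11, 12, 13, 14]
print(build_prompt("你好"))        # → 你好</s>
print(strip_prompt(ids, 2))        # → [12, 13, 14]
```

With these helpers, the generation loop reduces to formatting the user text, calling `model.generate`, and passing `input_ids.size(1)` to `strip_prompt` before decoding.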