A chat fine-tuned version of gemma-7b (the non-IT, i.e. base, checkpoint).
It was fine-tuned on simple chat-format data.
history
- 0.1 : 2024-04-05 first SFT version uploaded; DPO is still under consideration
Training details
- Dataset : maywell/koVast, reformatted to the ChatML format used by philschmid/gemma-tokenizer-chatml (see the conversion sketch after this list)
- GPU : RTX 3090 24G x 1
- optimizer : adamw_torch
- lr scheduler type : cosine
- Training time : 140 hours
- Epochs : 1
- train loss : 0.8991
- eval loss : 0.7305
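For reference, the conversion amounts to mapping each conversational sample into ChatML-style messages and rendering them with the tokenizer's chat template. Below is a minimal sketch; the "instruction"/"output" field names are assumptions for illustration, not the actual maywell/koVast schema.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("philschmid/gemma-tokenizer-chatml")

def to_chatml_text(sample):
    # Hypothetical field names; adapt to the real dataset columns.
    messages = [
        {"role": "user", "content": sample["instruction"]},
        {"role": "assistant", "content": sample["output"]},
    ]
    # Renders the turns with <|im_start|>/<|im_end|> markers for SFT.
    return tokenizer.apply_chat_template(messages, tokenize=False)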
Usage (bfloat16, requires about 17 GB of GPU memory)
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

checkpoint = "nmj21c/gemma-7b-andj-sft"
dtype = torch.bfloat16

# Load the model in bfloat16 with FlashAttention 2 on GPU 0.
model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    attn_implementation="flash_attention_2",
    device_map={"": 0},
    torch_dtype=dtype,
)

# Use the ChatML tokenizer the model was fine-tuned with.
tokenizer_checkpoint = "philschmid/gemma-tokenizer-chatml"
tokenizer = AutoTokenizer.from_pretrained(tokenizer_checkpoint)

# Build a ChatML conversation and render it as a prompt string.
chat = [
    {"role": "system", "content": ""},
    {"role": "user", "content": "서울의 강남역에서 맛집 추천해줘"},  # "Recommend good restaurants near Gangnam Station in Seoul"
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

# The ChatML turn delimiter is used as the stop token for generation.
eos_token_str = "<|im_end|>"
eos_token = tokenizer(eos_token_str, add_special_tokens=False)["input_ids"][0]

inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt").to(model.device)

outputs = model.generate(
    input_ids=inputs,
    max_new_tokens=1024,
    eos_token_id=eos_token,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)

# Strip the prompt and the trailing <|im_end|> from the decoded output.
response = tokenizer.decode(outputs[0])[len(prompt):].strip().replace(eos_token_str, '')
print(response)
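For orientation, with a standard ChatML template the prompt produced by apply_chat_template above (add_generation_prompt=True) should look roughly like the output of print(prompt) shown below; the exact whitespace depends on the philschmid/gemma-tokenizer-chatml template. Generation then stops at the next <|im_end|>, which is why that token is passed as eos_token_id.

<|im_start|>system
<|im_end|>
<|im_start|>user
서울의 강남역에서 맛집 추천해줘<|im_end|>
<|im_start|>assistant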