---
license: mit
base_model:
- paust/pko-t5-base
pipeline_tag: text2text-generation
---


A Korean text2text-generation model based on paust/pko-t5-base.

Since this model uses the paust/pko-t5-base tokenizer, you need to load the tokenizer from the base model:
```python
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained("paust/pko-t5-base")
model = T5ForConditionalGeneration.from_pretrained("emotionanalysis/diaryempathizer-t5-ko")
```


Test code:

```python
import torch
from transformers import T5TokenizerFast, T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("emotionanalysis/diaryempathizer-t5-ko")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
model.eval()

tokenizer = T5TokenizerFast.from_pretrained("paust/pko-t5-base")

# Sample diary entry (Korean): a student's exhausting week, a moving lecture
# on automata, and a renewed sense of purpose.
input_text = """
์ค๋์ ์ ๋ง๋ก ์ฆ๊ฑฐ์ด ๋ ์ด์๋ค. ๋ฉฐ์น ์งธ ์ ๋ ๋ชป ์๊ณ ์๊ณ ๋ฆฌ์ฆ ์์
์ ๊ธฐ๋ ์ค์ด๋ค.
์ด์ ์ ๋ง๋ก ๋๋ ๋ฒํฐ๊ธฐ ์ด๋ ต๊ฒ ๋ค๊ณ ๋ณด์ธ๋ค. ๋ฎ์ ๊ต์๋์ด ๊ฐ์๊ธฐ ์ด์ ์ ์ผ๋ก ์์
์ ํ์๋ ๊ฒ์ด์๋ค.
๊ทธ๋ฌ๋ค๊ฐ automata์ ๋ํด ์ค๋ช
ํ์๋๋ฐ, ์ ๋ง ๊ฐ๋์ ๋๋ฌผ๊ณผ ๊ฒฝํ์ ๊ธํ  ๊ธธ ์์๊ณ ,
์ด ์์
์ ๋ฃ๊ธฐ ์ํด ํ์ด๋ฌ๋ค๋ ์๊ฐ์ด ๋ค์์ผ๋ฉฐ, ๊ณผ์ (ํฌ์)๊น์ง ์ฃผ์๋ ๊ฒ์ด์๋ค.
๋๋ ํฉํ๊ฒฝ์ ๋น ์ก๋ค. ๋ชฉ์จ์ ๋ฐ์น๊ณ ์ ์์ ๋ด๋์ ธ์ผ ํ  ์ธ์์ ์ด์ ๋ฅผ ์ฐพ์๊ธฐ ๋๋ฌธ์ด๋ค.
"""

inputs = tokenizer(input_text, return_tensors="pt", max_length=512, truncation=True, padding="max_length")
inputs = {key: value.to(device) for key, value in inputs.items()}

# Pass the attention mask along with the input ids so the padding tokens
# introduced by padding="max_length" are masked out during generation.
with torch.no_grad():
    outputs = model.generate(
        input_ids=inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        max_length=128,
        num_beams=4,
        early_stopping=True,
    )

generated_comment = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_comment)
```
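
Note that the test code truncates input at `max_length=512` tokens, so anything past that point in a long diary entry is silently dropped. One workaround is to split long entries into pieces and generate a comment per piece. Below is a minimal sketch of such a splitter; `chunk_diary` and its `max_chars` budget are hypothetical helpers for illustration, not part of this model or the transformers API, and the character budget only approximates the real token limit:

```python
import re

def chunk_diary(text: str, max_chars: int = 700) -> list[str]:
    """Split a diary entry into chunks of at most max_chars characters,
    breaking on sentence-ending punctuation so sentences stay intact.
    A single sentence longer than max_chars becomes its own oversized chunk."""
    # Split after sentence-final punctuation (., !, ?) followed by whitespace.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s.strip()]
    chunks, current = [], ""
    for sentence in sentences:
        candidate = (current + " " + sentence).strip()
        if current and len(candidate) > max_chars:
            # Adding this sentence would overflow the budget: close the chunk.
            chunks.append(current)
            current = sentence
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be run through the tokenizer and generation call above; tune `max_chars` downward if tokenized chunks still exceed the 512-token limit.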