---
language: hr
tags:
- GPT-2
datasets:
- hrwac
---
|
If you use this model for your own tasks, please share your results in the community tab.

With TensorFlow you can use:
|
```python
from transformers import GPT2Tokenizer, TFGPT2LMHeadModel

# Load the tokenizer and the TensorFlow model with a language-modelling head
tokenizer = GPT2Tokenizer.from_pretrained("domsebalj/GPcroaT")
model = TFGPT2LMHeadModel.from_pretrained("domsebalj/GPcroaT")

text = "Zamijeni ovaj tekst vlastitim"  # "Replace this text with your own"

# Encode the prompt as TensorFlow tensors
input_ids = tokenizer.encode(text, return_tensors='tf')

# Generate five candidate continuations with beam search
beam_output = model.generate(
    input_ids,
    max_length=80,
    min_length=10,
    num_beams=10,
    temperature=5.7,
    no_repeat_ngram_size=2,
    num_return_sequences=5,
    repetition_penalty=7.5,
    length_penalty=1.5,
    top_k=50
)

# Decode every returned sequence back to text
output = []
for i in beam_output:
    output.append(tokenizer.decode(i))

print(output)
```
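
The same generation settings can be used from PyTorch. The sketch below is a minimal, untested equivalent; it assumes the repository provides PyTorch weights (if only TensorFlow weights exist, pass `from_tf=True` to `from_pretrained`).

```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

# Assumption: PyTorch weights are available for this checkpoint
tokenizer = GPT2Tokenizer.from_pretrained("domsebalj/GPcroaT")
model = GPT2LMHeadModel.from_pretrained("domsebalj/GPcroaT")

text = "Zamijeni ovaj tekst vlastitim"  # "Replace this text with your own"
input_ids = tokenizer.encode(text, return_tensors='pt')

# Same beam-search parameters as the TensorFlow example above
beam_output = model.generate(
    input_ids,
    max_length=80,
    min_length=10,
    num_beams=10,
    temperature=5.7,
    no_repeat_ngram_size=2,
    num_return_sequences=5,
    repetition_penalty=7.5,
    length_penalty=1.5,
    top_k=50
)

# Decode all returned sequences back to text
output = [tokenizer.decode(i) for i in beam_output]
print(output)
```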