Update README.md
README.md CHANGED

@@ -33,7 +33,7 @@ You can use the model and tokenizer as follows:
 First run the bash code below to clone the repository (this will take some time). Because of the custom model class, this model cannot be run with the usual huggingface Model setups.
 
 ```bash
-git clone https://huggingface.co/hplisiecki/
+git clone https://huggingface.co/hplisiecki/word2affect_polish
 ```
 
 Proceed as follows:
@@ -43,8 +43,10 @@ from word2affect_polish.model_script import CustomModel # importing the custom m
 from transformers import PreTrainedTokenizerFast
 
 model_directory = "word2affect_polish" # path to the cloned repository
-
-
+
+model = CustomModel.from_pretrained(model_directory)
+tokenizer = PreTrainedTokenizerFast.from_pretrained(model_directory)
+
 inputs = tokenizer("This is a test input.", return_tensors="pt")
 outputs = model(inputs['input_ids'], inputs['attention_mask'])
 
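For reference, the snippets in the updated hunks combine into the short end-to-end script below. This is a minimal sketch, assuming the repository has already been cloned into the working directory as `word2affect_polish` and that `CustomModel` is called with `input_ids` and `attention_mask` exactly as the README shows; the structure and meaning of `outputs` are defined by the custom model class and are not specified here.

```python
# Minimal usage sketch assembled from the README snippets above.
# Assumption: `git clone https://huggingface.co/hplisiecki/word2affect_polish`
# has already been run in the current working directory.
from word2affect_polish.model_script import CustomModel  # custom model class shipped with the repo
from transformers import PreTrainedTokenizerFast

model_directory = "word2affect_polish"  # path to the cloned repository

# Load the custom model and the fast tokenizer from the cloned files.
model = CustomModel.from_pretrained(model_directory)
tokenizer = PreTrainedTokenizerFast.from_pretrained(model_directory)

# Tokenize a sample sentence and run a forward pass.
inputs = tokenizer("This is a test input.", return_tensors="pt")
outputs = model(inputs["input_ids"], inputs["attention_mask"])

print(outputs)  # output format depends on CustomModel
```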