Commit 09984ee (verified)
fragata committed
1 Parent(s): 89ecc6d

Update README.md

Files changed (1):
  1. README.md +37 -0

README.md CHANGED
@@ -25,8 +25,45 @@ tags:
 
  - max_seq_length = 32 768
  - bfloat16

+
  ## Usage with pipeline

+ ```python
+ from transformers import pipeline, Qwen2ForCausalLM, AutoTokenizer
+
+ model = Qwen2ForCausalLM.from_pretrained("NYTK/PULI-Trio-Q")
+ tokenizer = AutoTokenizer.from_pretrained("NYTK/PULI-Trio-Q")
+ prompt = "Elmesélek egy történetet a nyelvtechnológiáról."
+ generator = pipeline(task="text-generation", model=model, tokenizer=tokenizer, device=0)
+
+ print(generator(prompt, max_new_tokens=30)[0]["generated_text"])
+ ```
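
The snippet above can also be run without `pipeline` by calling `generate` directly. A minimal sketch, not part of this commit, assuming the same `NYTK/PULI-Trio-Q` checkpoint:

```python
# Hedged sketch (not from the commit): direct generate() call on the same checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "NYTK/PULI-Trio-Q", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("NYTK/PULI-Trio-Q")

# Tokenize the Hungarian prompt ("Let me tell a story about language technology.")
inputs = tokenizer(
    "Elmesélek egy történetet a nyelvtechnológiáról.", return_tensors="pt"
).to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```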
+
+ Since the model was continually pre-trained from Qwen2.5 7B **_Instruct_**, it can be used as a chat model.
+
+ ```python
+ import torch
+ from transformers import pipeline
+
+ model_id = "NYTK/PULI-Trio-Q"
+ pipe = pipeline(
+     "text-generation",
+     model=model_id,
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+ )
+ messages = [
+     {"role": "system", "content": "You are a helpful assistant"},
+     {"role": "user", "content": "Mit gondolsz a nyelvtechnológiáról?"},
+ ]
+ outputs = pipe(
+     messages,
+     max_new_tokens=256,
+ )
+ print(outputs[0]["generated_text"][-1])
+ ```
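
The chat example above goes through `pipeline`; the same turn can be reproduced with the tokenizer's chat template and a direct `generate` call. A minimal sketch, not part of this commit, under the same checkpoint assumption:

```python
# Hedged sketch (not from the commit): manual chat-template path for the same messages.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NYTK/PULI-Trio-Q"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant"},
    # "What do you think about language technology?"
    {"role": "user", "content": "Mit gondolsz a nyelvtechnológiáról?"},
]
# Render the conversation with the model's chat template, appending the generation prompt.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, i.e. the assistant's reply.
print(tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```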
+
+


  ## Citation