QCRI / LlamaLens

mbayan committed 67a2b6c (verified) · 1 parent: a397f21

Update README.md

Files changed (1): README.md (+11 -7)
README.md CHANGED
@@ -51,24 +51,28 @@ To utilize the LlamaLens model for inference, follow these steps:
 Use the transformers library to load the LlamaLens model and its tokenizer:
 
 ```python
-from transformers import AutoTokenizer, AutoModelForCausalLM
+from transformers import pipeline
 
 model_name = "QCRI/LlamaLens"
-tokenizer = AutoTokenizer.from_pretrained(model_name)
-model = AutoModelForCausalLM.from_pretrained(model_name)
+pipe = pipeline("text-generation", model=model_name)
 ```
 3. **Prepare the Input:**
    Tokenize your input text:
 ```python
 input_text = "Your input text here"
-inputs = tokenizer(input_text, return_tensors="pt")
+system_message = "Your system message text here"
+messages = [
+    {"role": "system", "content": system_message},
+    {"role": "user", "content": input_text},
+]
+
+
 ```
 4. **Generate the Output:**
    Generate a response using the model:
 ```python
-output = model.generate(**inputs)
-response = tokenizer.decode(output[0], skip_special_tokens=True)
+generated_text = pipe(messages, num_return_sequences=1)
+print(generated_text)
 ```
 
 ## Results
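
For reference, the updated steps assemble into a single runnable script. The sketch below only rearranges the committed snippets; `max_new_tokens` and the indexing into the returned conversation are assumptions about recent `transformers` chat-pipeline behavior, not part of the committed README:

```python
from transformers import pipeline

# Step 2: load the model; the pipeline wraps tokenization and chat templating.
model_name = "QCRI/LlamaLens"
pipe = pipeline("text-generation", model=model_name)

# Step 3: build the chat-style input from a system message and a user turn.
system_message = "Your system message text here"
input_text = "Your input text here"
messages = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": input_text},
]

# Step 4: generate. max_new_tokens is an illustrative addition (assumption);
# recent transformers versions return the conversation with the model's
# reply appended as the last message.
generated_text = pipe(messages, num_return_sequences=1, max_new_tokens=256)
print(generated_text[0]["generated_text"][-1]["content"])
```

Printing the full `generated_text` object, as the README does, works as well; the indexing above merely extracts the assistant's reply.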