KLimaLima committed on
Commit 8996941 · verified · 1 Parent(s): 35f3351

Update README.md

Files changed (1)
  1. README.md +22 -9
README.md CHANGED
@@ -16,12 +16,12 @@ Download the model
  # This is to set the path to save the model
  from pathlib import Path

- mistral_models_path = Path.home().joinpath('Question_Generation_model', 'UTeMGPT')
- mistral_models_path.mkdir(parents=True, exist_ok=True)
+ models_path = Path.home().joinpath('Question_Generation_model', 'UTeMGPT')
+ models_path.mkdir(parents=True, exist_ok=True)

  # Download the model
  from huggingface_hub import snapshot_download
- my_model = snapshot_download(repo_id="KLimaLima/finetuned-Question-Generation-mistral-7b-instruct", local_dir=mistral_models_path)
+ my_model = snapshot_download(repo_id="KLimaLima/finetuned-Question-Generation-mistral-7b-instruct", local_dir=models_path)
  ```

  To load the model that has been downloaded
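The `from_pretrained` call that this step refers to is only partially visible in this diff; the hunk header below shows just its opening line. Purely for context, a minimal unsloth loading sketch might look like the following, where `max_seq_length`, `dtype`, and `load_in_4bit` are illustrative assumptions rather than values taken from the README:

```python
# Sketch only: load the locally downloaded checkpoint with unsloth.
# Argument values below are assumptions for illustration, not from the model card.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = my_model,      # local path returned by snapshot_download above
    max_seq_length = 2048,      # assumed maximum context length
    dtype = None,               # let unsloth auto-detect the dtype
    load_in_4bit = True,        # assumed 4-bit quantized loading
)
FastLanguageModel.for_inference(model)  # enable unsloth's faster inference mode
```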
@@ -40,12 +40,6 @@ model, tokenizer = FastLanguageModel.from_pretrained(
  FastLanguageModel.for_inference(model) # Enable native 2x faster inference
  ```

- To generate output
- ```python
- outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)
- tokenizer.batch_decode(outputs)
- ```
-
  This model uses the alpaca prompt format, as shown below
  ```python
  alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
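The template string is split across this hunk and the next one, and its middle section falls outside the diff context. As an illustration of how the code added below uses it, the three `{}` slots are filled with the instruction, the input sentence, and an empty string, so that the prompt ends at the `### Response:` section and the model generates the answer from there:

```python
# Illustration only (not part of the README): fill the Alpaca template for generation.
# `alpaca_prompt`, `instruction`, and `sentence` are the names defined in the README.
prompt_text = alpaca_prompt.format(instruction, sentence, "")
print(prompt_text)  # the empty third slot leaves "### Response:" as the final section
```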
@@ -58,6 +52,25 @@ alpaca_prompt = """Below is an instruction that describes a task, paired with an
58
 
59
  ### Response:
60
  {}"""
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
61
  ```
62
 
63
  # Uploaded model
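As written, the generation snippet added above decodes the whole sequence, so the decoded text repeats the prompt before the generated question. A small sketch, assuming the `inputs` and `outputs` variables from that snippet, that keeps only the newly generated tokens:

```python
# Sketch only: strip the echoed prompt and decode just the generated continuation.
prompt_length = inputs["input_ids"].shape[1]   # number of prompt tokens fed to generate()
generated_text = tokenizer.batch_decode(
    outputs[:, prompt_length:],                # keep only tokens produced by the model
    skip_special_tokens = True,                # drop EOS/padding markers from the text
)
print(generated_text[0])
```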
 