katuni4ka committed · verified
Commit 4f81253 · 1 Parent(s): 1c7e67d

Update README.md

Files changed (1): README.md (+33 -1)
README.md CHANGED
@@ -30,7 +30,8 @@ The provided OpenVINO™ IR model is compatible with:
  * OpenVINO version 2024.4.0 and higher
  * Optimum Intel 1.23.1 and higher
 
- ## Running Model Inference
+ ## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)
+
 
  1. Install packages required for using [Optimum Intel](https://huggingface.co/docs/optimum/intel/index) integration with the OpenVINO backend:
 
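Note that the Optimum Intel commands themselves sit between these two hunks, so only the install step above and the trailing `print(text)` in the next hunk header are visible in this diff. For reference, here is a minimal sketch of that elided flow, assuming the standard `OVModelForCausalLM` API from `optimum-intel`; the exact snippet lives in the rendered README, not in this commit's changed lines:

```
# Hypothetical reconstruction of the README's Optimum Intel example;
# the real commands are outside this diff's context.
# pip install optimum[openvino]

from transformers import AutoTokenizer
from optimum.intel import OVModelForCausalLM

model_id = "OpenVINO/Phi-3-mini-4k-instruct-int4-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)  # loads the OpenVINO IR directly

inputs = tokenizer("What is OpenVINO?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(text)
```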
@@ -57,6 +58,37 @@ print(text)
 
  For more examples and possible optimizations, refer to the [OpenVINO Large Language Model Inference Guide](https://docs.openvino.ai/2024/learn-openvino/llm_inference_guide.html).
 
+ ## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)
+
+ 1. Install packages required for using OpenVINO GenAI:
+ ```
+ pip install openvino-genai huggingface_hub
+ ```
+
+ 2. Download the model from the Hugging Face Hub:
+
+ ```
+ import huggingface_hub as hf_hub
+
+ model_id = "OpenVINO/Phi-3-mini-4k-instruct-int4-ov"
+ model_path = "Phi-3-mini-4k-instruct-int4-ov"
+
+ hf_hub.snapshot_download(model_id, local_dir=model_path)
+ ```
+
+ 3. Run model inference:
+
+ ```
+ import openvino_genai as ov_genai
+
+ device = "CPU"
+ pipe = ov_genai.LLMPipeline(model_path, device)
+ print(pipe.generate("What is OpenVINO?", max_length=200))
+ ```
+
+ More GenAI usage examples can be found in the OpenVINO GenAI library [docs](https://github.com/openvinotoolkit/openvino.genai/blob/master/src/README.md) and [samples](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#openvino-genai-samples).
+
  ## Limitations
 
  Check the [original model card](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) for limitations.
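The GenAI snippet added in this commit covers single-shot generation only. A minimal chat-style sketch with token streaming, assuming the `start_chat`/`finish_chat` and streamer-callback pattern used in the OpenVINO GenAI samples linked above:

```
import openvino_genai as ov_genai

# Stream subwords to stdout as they are generated;
# return False to continue generation, True to stop early.
def streamer(subword):
    print(subword, end="", flush=True)
    return False

# model_path matches the directory downloaded in step 2 above
pipe = ov_genai.LLMPipeline("Phi-3-mini-4k-instruct-int4-ov", "CPU")

config = ov_genai.GenerationConfig()
config.max_new_tokens = 200

# start_chat()/finish_chat() keep the conversation history between turns
pipe.start_chat()
for question in ["What is OpenVINO?", "How does it relate to this model?"]:
    print(f"\n> {question}\n")
    pipe.generate(question, config, streamer)
pipe.finish_chat()
```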