---
title: README
emoji: 🐢
colorFrom: purple
colorTo: gray
sdk: static
pinned: false
---

### Intel on Hugging Face
Intel and Hugging Face are building powerful optimization tools to accelerate training and inference with Hugging Face libraries.
Get started with deploying models on Intel® architecture with these hands-on tutorials, written by engineers from Hugging Face and Intel:

| Blog | Description |
| :--- | :--- |
| [Building Cost-Efficient Enterprise RAG applications with Intel Gaudi 2 and Intel Xeon](https://huggingface.co/blog/cost-efficient-rag-applications-with-intel) | Develop and deploy RAG applications as part of OPEA, the Open Platform for Enterprise AI |
| [A Chatbot on your Laptop: Phi-2 on Intel Meteor Lake](https://huggingface.co/blog/phi2-intel-meteor-lake) | Deploy Phi-2 on your local laptop with Intel OpenVINO in the Optimum Intel library |
### Get started on Intel architecture with Optimum Intel and Optimum Habana
To get started with Hugging Face Transformers on Intel hardware, explore the resources below.
*Optimum Intel* - To deploy on Intel® Xeon, Intel® Max Series GPU, and Intel® Core Ultra, check out [optimum-intel](https://github.com/huggingface/optimum-intel), the interface between Intel architectures and the 🤗 Transformers and Diffusers libraries. You can use these backends (a short inference example follows the table):

| Backend | Installation |
|:---|:---|
| [OpenVINO™](https://huggingface.co/docs/optimum/en/intel/inference) | `pip install --upgrade --upgrade-strategy eager "optimum[openvino]"` |
| [Intel® Extension for PyTorch*](https://intel.github.io/intel-extension-for-pytorch/#introduction) | `pip install --upgrade --upgrade-strategy eager "optimum[ipex]"` |
| [Intel® Neural Compressor](https://huggingface.co/docs/optimum/en/intel/optimization_inc) | `pip install --upgrade --upgrade-strategy eager "optimum[neural-compressor]"` |
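
As a quick illustration, here is a minimal OpenVINO inference sketch with Optimum Intel. The model ID `gpt2` is just a placeholder; any causal LM on the Hub that exports to OpenVINO should work the same way:

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer, pipeline

model_id = "gpt2"  # placeholder; swap in your own Hub model ID

# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly
model = OVModelForCausalLM.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(pipe("Intel and Hugging Face are")[0]["generated_text"])
```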
*Optimum Habana* - To deploy on Intel® Gaudi® AI accelerators, check out [optimum-habana](https://github.com/huggingface/optimum-habana/), the interface between Gaudi and the 🤗 Transformers and Diffusers libraries. To install the latest stable release:
```bash
pip install --upgrade-strategy eager "optimum[habana]"
```
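As a rough sketch, training with Optimum Habana mirrors the familiar `Trainer` API through `GaudiTrainer`. The model, Gaudi config, and toy dataset below are illustrative placeholders, and running it assumes access to a Gaudi machine:

```python
from datasets import Dataset
from optimum.habana import GaudiTrainer, GaudiTrainingArguments
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "bert-base-uncased"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

# Tiny toy dataset, tokenized for sequence classification
raw = Dataset.from_dict({"text": ["great product", "poor support"], "label": [1, 0]})
train_dataset = raw.map(
    lambda ex: tokenizer(ex["text"], truncation=True, padding="max_length", max_length=32)
)

args = GaudiTrainingArguments(
    output_dir="./gaudi-out",
    use_habana=True,                               # execute on Gaudi HPUs
    use_lazy_mode=True,                            # lazy-mode graph execution
    gaudi_config_name="Habana/bert-base-uncased",  # ready-made Gaudi config from the Hub
    num_train_epochs=1,
)

trainer = GaudiTrainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```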
### Ways to get involved
Check out the [Intel® Tiber™ AI Cloud](https://cloud.intel.com) to run your latest GenAI or LLM workload on Intel architecture.
Want to share your model fine-tuned on Intel architecture? Head to the [Powered-by-Intel LLM Leaderboard](https://huggingface.co/spaces/Intel/powered_by_intel_llm_leaderboard), and see its "Deployment Tips" tab for more detailed deployment tips and sample code.
Join us on the [Intel DevHub Discord](https://discord.gg/kfJ3NKEw5t) to ask questions and interact with our AI developer community.