# Ollama
One-click deployment of local LLMs, that is [Ollama](https://github.com/ollama/ollama).

## Install

- [Ollama on Linux](https://github.com/ollama/ollama/blob/main/docs/linux.md)
- [Ollama Windows Preview](https://github.com/ollama/ollama/blob/main/docs/windows.md)
- [Docker](https://hub.docker.com/r/ollama/ollama)

## Launch Ollama

Decide which LLM you want to deploy ([here is a list of supported LLMs](https://ollama.com/library)), say, **mistral**:

```bash
$ ollama run mistral
```

Or, if Ollama runs in Docker:

```bash
$ docker exec -it ollama ollama run mistral
```
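Before wiring Ollama into RAGFlow, it can help to confirm the service is listening and the model responds. The sketch below is an assumption-laden check: it assumes Ollama's default port 11434 on localhost and the **mistral** model pulled above, and calls Ollama's REST API directly; adjust host and model name to your setup.

```bash
# List the models Ollama has pulled locally.
$ curl http://localhost:11434/api/tags

# Ask the running model for a single, non-streamed completion.
$ curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Say hello in one short sentence.",
  "stream": false
}'
```

If both calls return JSON, the service is reachable and ready to be registered in RAGFlow.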
## Use Ollama in RAGFlow

- Go to 'Settings > Model Providers > Models to be added > Ollama'.

> Base URL: Enter the base URL where the Ollama service is accessible, e.g., `http://<your-ollama-host>:11434`.

- Use Ollama Models.
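If RAGFlow itself runs in a Docker container, a Base URL of `http://localhost:11434` points at the RAGFlow container rather than the machine running Ollama. A quick, hedged way to check reachability from inside the container is sketched below; `ragflow-server` is a hypothetical container name, and `host.docker.internal` resolves on Docker Desktop or on Linux only when mapped with `--add-host=host.docker.internal:host-gateway`.

```bash
# Check that the Ollama endpoint is reachable from inside the RAGFlow container.
# 'ragflow-server' is an assumed container name; replace it with yours.
$ docker exec -it ragflow-server curl http://host.docker.internal:11434/api/tags
```

If the request fails, try the host machine's LAN IP address in the Base URL instead.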