---
license: apache-2.0
language:
- en
base_model:
- internlm/Intern-S1-mini
base_model_relation: quantized
pipeline_tag: image-text-to-text
tags:
- chat
---

# Intern-S1-mini-GGUF Model

![image/png](https://cdn-uploads.huggingface.co/production/uploads/642695e5274e7ad464c8a5ba/E43cgEXBRWjVJlU_-hdh6.png)

👋 Join us on Discord and WeChat

## Introduction

The `Intern-S1-mini` model in GGUF format can be utilized by [llama.cpp](https://github.com/ggerganov/llama.cpp), a highly popular open-source framework for Large Language Model (LLM) inference, across a variety of hardware platforms, both locally and in the cloud. This repository offers `Intern-S1-mini` models in GGUF format in both half precision and various low-bit quantized versions, including `q8_0`. In the subsequent sections, we will first present the installation procedure, followed by an explanation of the model download process. Finally, we will illustrate model inference and service deployment through specific examples.

## Installation

We recommend building `llama.cpp` from source. The following code snippet provides an example for the Linux CUDA platform. For instructions on other platforms, please refer to the [official guide](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#build).

- Step 1: create a conda environment and install cmake

  ```shell
  conda create --name interns1 python=3.10 -y
  conda activate interns1
  pip install cmake
  ```

- Step 2: clone the source code and build the project

  ```shell
  git clone --depth=1 https://github.com/ggerganov/llama.cpp.git
  cd llama.cpp
  cmake -B build -DGGML_CUDA=ON
  cmake --build build --config Release -j
  ```

All the built targets can be found in the subdirectory `build/bin`. In the following sections, we assume that the working directory is the root directory of `llama.cpp`.

## Download models

As mentioned in the [introduction section](#introduction), this repository includes several models with varying levels of computational precision. You can download the appropriate model based on your requirements. For instance, the fp16 GGUF files can be downloaded as below:

```shell
pip install huggingface-hub
huggingface-cli download internlm/Intern-S1-mini-GGUF --include "*-f16.gguf" --local-dir Intern-S1-mini-GGUF --local-dir-use-symlinks False
```

## Inference

You can use `build/bin/llama-mtmd-cli` for conducting inference. For a detailed explanation of `build/bin/llama-mtmd-cli`, please refer to [this guide](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).

### Chat example

Here is an example of using the thinking system prompt.

```shell
system_prompt="<|im_start|>system\nYou are an expert reasoner with extensive experience in all areas. You approach problems through systematic thinking and rigorous reasoning. Your response should reflect deep understanding and precise logical thinking, making your solution path and reasoning clear to others. Please put your thinking process within <think>...</think> tags.\n<|im_end|>\n"

build/bin/llama-mtmd-cli \
    --model Intern-S1-mini-GGUF/f16/Intern-S1-mini-f16.gguf \
    --mmproj Intern-S1-mini-GGUF/f16/mmproj-Intern-S1-mini-f16.gguf \
    --predict 2048 \
    --ctx-size 8192 \
    --gpu-layers 100 \
    --temp 0.8 \
    --top-p 0.8 \
    --top-k 50 \
    --seed 1024
```

Then input your question, adding an image with `/image xxx.jpg`.
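When the thinking system prompt is used, replies carry the model's reasoning inside `<think>...</think>` tags ahead of the final answer. If you post-process transcripts and only want the answer, a minimal client-side sketch could look like the following (the `strip_think` helper is illustrative, not part of llama.cpp):

```python
import re

def strip_think(reply: str) -> tuple[str, str]:
    """Split a reply into its <think>...</think> reasoning block and the final answer."""
    match = re.search(r"<think>(.*?)</think>", reply, flags=re.DOTALL)
    if match is None:
        return "", reply.strip()
    return match.group(1).strip(), reply[match.end():].strip()

# Example with a hypothetical reply string
reasoning, answer = strip_think(
    "<think>The user wants time-management tips; give three.</think>"
    "1. Plan tomorrow tonight. 2. Batch similar tasks. 3. Schedule breaks."
)
print(answer)
```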
## Serving

`llama.cpp` provides an OpenAI API-compatible server, `llama-server`. You can deploy the model as a service like this:

```shell
./build/bin/llama-server \
    --model Intern-S1-mini-GGUF/f16/Intern-S1-mini-f16.gguf \
    --mmproj Intern-S1-mini-GGUF/f16/mmproj-Intern-S1-mini-f16.gguf \
    --gpu-layers 100 \
    --temp 0.8 \
    --top-p 0.8 \
    --top-k 50 \
    --port 8080 \
    --seed 1024
```

On the client side, you can access the service through the OpenAI API:

```python
from openai import OpenAI

client = OpenAI(
    api_key='YOUR_API_KEY',
    base_url='http://localhost:8080/v1'
)
model_name = client.models.list().data[0].id
response = client.chat.completions.create(
    model=model_name,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "provide three suggestions about time management"},
    ],
    temperature=0.8,
    top_p=0.8
)
print(response)
```

## Ollama

```shell
# install ollama
curl -fsSL https://ollama.com/install.sh | sh
# fetch model
ollama pull internlm/interns1:mini
# run model
ollama run internlm/interns1:mini
# then use an OpenAI client to call http://localhost:11434/v1 (see the example below)
```
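As a minimal sketch of that last step, you can reuse the same `openai` Python client as in the serving section, pointed at Ollama's default port and the `internlm/interns1:mini` tag pulled above (Ollama does not validate the API key, so any placeholder works):

```python
from openai import OpenAI

# Ollama exposes an OpenAI-compatible API on port 11434 by default.
# The api_key is required by the client library but not checked by Ollama.
client = OpenAI(
    api_key='ollama',
    base_url='http://localhost:11434/v1'
)

response = client.chat.completions.create(
    model='internlm/interns1:mini',  # the tag fetched via `ollama pull`
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "provide three suggestions about time management"},
    ],
    temperature=0.8,
    top_p=0.8
)
print(response.choices[0].message.content)
```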