---
license: llama3.1
language:
- en
tags:
- llama3.1
- macos
- chatpdf
- local
- text-generation-inference
- transformers
- meta
- facebook
- llama
- gguf
- llama.cpp
---
# Llama 3.1 8B GGUF models that can be run on macOS and other devices

ChatPDFLocal focuses on capable, lightweight LLMs that can easily be run for chatting with PDFs on macOS, balancing output quality and inference speed.

If you are a Mac user, you can download the ChatPDFLocal macOS app from [here](www.chatpdflocal.com), load one PDF or a batch of PDFs, and quickly try the model out through chat-based reading.

PS. Click [here](https://awa-ai.lemonsqueezy.com/buy/89be07f8-060d-4a8f-a758-f25352773168) to subscribe and use ChatPDFLocal for free. The default model used by the local LLM is ggml-model-Q3_K_M.gguf; you can also load any customized open-source model that suits your Mac's configuration by entering its Hugging Face repo. Enjoy, and thank you!
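
If you prefer to use the GGUF file directly rather than through the app, here is a minimal sketch using the llama-cpp-python bindings for llama.cpp (one of this repo's tags). It assumes you have installed `llama-cpp-python` and downloaded the default ggml-model-Q3_K_M.gguf file mentioned above; the local path, context size, and prompt are illustrative assumptions, not settings used by the app.

```python
# Minimal sketch: load the GGUF model locally with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./ggml-model-Q3_K_M.gguf",  # assumed local path to the downloaded file
    n_ctx=4096,        # context window; adjust to fit your Mac's memory
    n_gpu_layers=-1,   # offload all layers to Metal on Apple Silicon
)

# Example prompt: paste a passage from a PDF and ask a question about it.
output = llm(
    "Summarize the following passage in one sentence:\n<passage text here>\n",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```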