---
license: llama3.1
language:
- en
tags:
- llama3.1
- macos
- chatpdf
- local
- text-generation-inference
- transformers
- meta
- facebook
- llama
- gguf
- llama.cpp
---
# Llama3.1 8B models in GGUF format that can be run locally on macOS and other devices.
ChatPDFLocal focuses on small, capable LLMs that can easily be run for chatting with PDFs on macOS, balancing output quality against inference speed.
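For reference, the snippet below is a minimal sketch of running one of these GGUF files directly with the llama-cpp-python bindings (an assumption; the ChatPDFLocal app handles model loading for you). The filename matches the default model mentioned at the end of this card; adjust `model_path` to wherever you downloaded the file.

```python
# Minimal sketch: run a GGUF model locally with llama-cpp-python (pip install llama-cpp-python).
# Assumes the default quantization from this repo has been downloaded to the current directory.
from llama_cpp import Llama

llm = Llama(
    model_path="./ggml-model-Q3_K_M.gguf",  # local GGUF file; adjust to your download location
    n_ctx=4096,                             # context window; lower it on machines with little RAM
)

output = llm(
    "Summarize this PDF section in one sentence: ...",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```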
If you are a Mac user, you can download the ChatPDFLocal macOS app from [here](www.chatpdflocal.com), load one PDF file or a batch of them, and quickly try the model through chat-based reading.
PS: Click [here](https://awa-ai.lemonsqueezy.com/buy/89be07f8-060d-4a8f-a758-f25352773168) to subscribe and use ChatPDFLocal for free. The default local model is ggml-model-Q3_K_M.gguf; you can also load any customized open-source model that fits your Mac's configuration by entering its Hugging Face repo. Enjoy, and thank you!
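If you prefer to fetch a GGUF file programmatically rather than entering a repo name in the app, the sketch below uses the huggingface_hub library; the `repo_id` is a placeholder, so substitute the repo that suits your Mac's configuration.

```python
# Sketch: download a GGUF file from a Hugging Face repo (pip install huggingface_hub).
# The repo_id below is a hypothetical placeholder, not a specific recommendation.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="your-username/your-gguf-repo",  # replace with the repo you want to use
    filename="ggml-model-Q3_K_M.gguf",       # the default quantization mentioned above
)
print(f"Model downloaded to {local_path}")
```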