---
license: llama3.1
language:
- en
tags:
- llama3.1
- macos
- chatpdf
- local
- text-generation-inference
- transformers
- meta
- facebook
- llama
- gguf
- llama.cpp
- legal
- llama3.1-8b
- quantization
---
# Llama 3.1 8B models in GGUF format that can run on macOS and other devices
This repo focuses on small, capable LLMs that can easily be run for chatting with PDFs on macOS, balancing output quality against inference speed.
If you are a Mac user, you can download the ChatPDFLocal macOS app from [here](https://www.chatpdflocal.com), load one PDF file or a batch of them, and quickly try the model by chatting with your documents.
PS. Click [here](https://awa-ai.lemonsqueezy.com/buy/89be07f8-060d-4a8f-a758-f25352773168) to subscribe and use ChatPDFLocal for free. The default local model is ggml-model-Q3_K_M.gguf, but you can also load any customized open-source model that suits your Mac's configuration by entering its Hugging Face repo name.
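Outside the app, a GGUF file like the ones in this repo can also be run directly with llama.cpp. A minimal sketch, assuming you have llama.cpp built locally; the `<repo-id>` placeholder is an assumption and must be replaced with the actual Hugging Face repo you want to pull from:

```shell
# Download the quantized model file from a Hugging Face repo
# (<repo-id> is a placeholder, e.g. the owner/name of this repo).
pip install -U "huggingface_hub[cli]"
huggingface-cli download <repo-id> ggml-model-Q3_K_M.gguf --local-dir ./models

# Chat with the model using llama.cpp's CLI; -n caps the number
# of tokens generated per response.
./llama-cli -m ./models/ggml-model-Q3_K_M.gguf \
  -p "Summarize the following PDF excerpt:" -n 256
```

Lower-bit quantizations (e.g. Q3_K_M) trade some answer quality for smaller memory use and faster inference, which is why they suit laptop-class Macs.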
Enjoy, thank you!