---
license: llama3.1
language:
- en
tags:
- llama3.1
- macos
- chatpdf
- local
- text-generation-inference
- transformers
- meta
- facebook
- llama
- gguf
- llama.cpp
- legal
- llama3.1-8b
- quantization
---
# Llama 3.1 8B GGUF models that can be run on macOS devices

This repo focuses on small but capable LLMs that can easily be run locally to chat with PDFs on macOS, balancing answer quality against inference speed.

If you are a Mac user, you can download the ChatPDFLocal macOS app from [here](https://www.chatpdflocal.com), load one PDF or a batch of PDFs, and quickly try out the model by chatting with your documents.

P.S. Click [here](https://awa-ai.lemonsqueezy.com/buy/89be07f8-060d-4a8f-a758-f25352773168) to subscribe, and you can use ChatPDFLocal for free. The local LLM defaults to ggml-model-Q3_K_M.gguf, but you can also load any open-source model that suits your Mac's hardware by entering its Hugging Face repo.
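
If you would rather use the GGUF file outside the app, it can also be loaded directly with llama.cpp's Python bindings. A minimal sketch, assuming the `llama-cpp-python` package is installed (`pip install llama-cpp-python`) and that the quantized file has been downloaded from this repo to the local path shown below (the path and prompt are illustrative, not part of the app):

```python
# Minimal sketch: load the quantized Llama 3.1 8B GGUF file with llama.cpp's
# Python bindings and run one chat completion. Requires llama-cpp-python and
# a locally downloaded GGUF file; the model_path below is an assumed location.
from llama_cpp import Llama

llm = Llama(
    model_path="./ggml-model-Q3_K_M.gguf",  # quantized weights from this repo
    n_ctx=4096,       # context window; lower it if memory is tight
    n_gpu_layers=-1,  # offload all layers to Metal on Apple Silicon
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Summarize the main points of this PDF excerpt: ..."}
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```

Q3_K_M is a reasonable middle ground between file size and output quality on 8 GB–16 GB Macs; larger quantizations of the same model trade more memory for better answers.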

Enjoy, thank you!