|
--- |
|
license: llama3.1 |
|
language: |
|
- en |
|
tags: |
|
- llama3.1 |
|
- macos |
|
- chatpdf |
|
- local |
|
- text-generation-inference |
|
- transformers |
|
- meta |
|
- facebook |
|
- llama |
|
- gguf |
|
- llama.cpp |
|
- legal |
|
- llama3.1-8b |
|
- quantization |
|
--- |
|
# Llama 3.1 8B Instruct models in GGUF format, which can run on macOS, Windows, or Linux PCs, cell phones, and smaller devices.
|
|
|
This repo focuses on small, high-quality LLMs that can easily be run for chatting with PDFs on macOS, balancing output quality against inference speed.
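If you prefer the command line, GGUF files like these can also be run directly with llama.cpp. The commands below are a sketch: the file name ggml-model-Q3_K_M.gguf is the default model mentioned later in this card, while `<repo-id>` is a placeholder you should replace with this model's actual Hugging Face repo id.

```shell
# Download one quantized file from the repo (replace <repo-id> with
# this model's Hugging Face id), then run it with llama.cpp's CLI.
huggingface-cli download <repo-id> ggml-model-Q3_K_M.gguf --local-dir .

# llama-cli ships with llama.cpp (installable e.g. via Homebrew).
llama-cli -m ggml-model-Q3_K_M.gguf -p "Summarize this paragraph: ..." -n 256
```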
|
|
|
## If you are a Mac user, the following free AI tools can help you read and understand PDFs effectively:
|
|
|
- If you use Zotero to manage and read your PDFs, [PapersGPT](https://www.papersgpt.com) is a free plugin that lets you chat with PDFs using your local Llama 3.1.
|
|
|
- You can download the ChatPDFLocal macOS app from [here](https://www.chatpdflocal.com), load one PDF or a batch of PDFs, and quickly try out the model through chat-based reading.

The default local model is ggml-model-Q3_K_M.gguf; you can also load any customized open-source model that suits your Mac's memory by entering its Hugging Face repo.
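As a rough rule of thumb for picking a quantization that fits your Mac's memory, the in-memory footprint of the weights can be approximated from the parameter count and the average bits per weight. The bits-per-weight figures below are illustrative assumptions, not exact GGUF sizes (real files add metadata and vary by tensor):

```python
# Rough footprint estimate for quantized GGUF weights.
# Bits-per-weight values are approximate averages for llama.cpp
# quantization types; treat them as assumptions, not exact figures.

APPROX_BITS_PER_WEIGHT = {
    "Q3_K_M": 3.9,   # approximate average bits/weight (assumption)
    "Q4_K_M": 4.8,
    "Q8_0": 8.5,
}

def estimate_weight_gib(n_params: float, quant: str) -> float:
    """Estimate the weight size in GiB for a given quantization type."""
    bits = APPROX_BITS_PER_WEIGHT[quant]
    return n_params * bits / 8 / 2**30

# Under these assumptions, an 8B-parameter model at Q3_K_M is roughly 3.6 GiB:
print(f"{estimate_weight_gib(8e9, 'Q3_K_M'):.1f} GiB")
```

This is why a Q3 quantization is a reasonable default on Macs with 8 GB of unified memory, while machines with more RAM can trade size for quality with Q4 or Q8 files.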
|
|
|
Enjoy, and thank you!