---
license: llama3.1
language:
- en
tags:
- llama3.1
- macos
- chatpdf
- local
- text-generation-inference
- transformers
- meta
- facebook
- llama
- gguf
- llama.cpp
- legal
- llama3.1-8b
- quantization
---
# Llama 3.1 8B Instruct models in GGUF format, which can be run on macOS, Windows, or Linux PCs, cell phones, and smaller devices

This repo focuses on capable tiny LLMs that can easily be run to chat with PDFs on macOS, balancing output quality and inference speed.

## If you are a Mac user, the following free AI tools can help you read and understand PDFs effectively

- If you use Zotero to manage and read your PDFs, [PapersGPT](https://www.papersgpt.com) is a free plugin that lets you chat with PDFs using your local Llama 3.1.

- You can download the ChatPDFLocal macOS app from [here](https://www.chatpdflocal.com), load one PDF or a batch of PDFs, and quickly try out the model through chat-based reading. The app's default local model is ggml-model-Q3_K_M.gguf; you can also load any customized open-source model that fits your Mac's configuration by entering a Hugging Face repo.
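If you prefer the command line, GGUF files like the ones in this repo can also be run directly with llama.cpp. A minimal sketch (the repo id placeholder is an assumption, replace it with this repo's actual path; the file name assumes the Q3_K_M quantization mentioned above):

```shell
# Download one quantized GGUF file from the repo
# (substitute <repo-id> with the actual Hugging Face repo path)
huggingface-cli download <repo-id> ggml-model-Q3_K_M.gguf --local-dir .

# Chat interactively using llama.cpp's CLI in conversation mode
llama-cli -m ggml-model-Q3_K_M.gguf -cnv -p "You are a helpful assistant."
```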

Enjoy, thank you!