- gitllama (#11, opened 3 months ago by thiageshhs)
- Code used to generate the model (#10, opened 6 months ago by Proryanator)
- How to use provided model? (#9, opened 11 months ago by Vader20FF, 1 reply)
- maxContextLength of just 64 tokens (#8, opened about 1 year ago by ronaldmannak)
- Unable to load model in SwitChat example (#7, opened over 1 year ago by ltouati, 6 replies)
- M1 MacBook Air with 8 GB RAM runs at 0.01 tokens/second, something is wrong, right? (#6, opened over 1 year ago by DanielCL, 5 replies)
- When I try to load the model (#5, opened over 1 year ago by SriBalaaji, 4 replies)
- Understanding CoreML conversion of Llama 2 7B (#4, opened over 1 year ago by kharish89, 15 replies)