Is there a way to run this model locally on a phone yet, or do I need to wait for llama.cpp compatibility?
@Lxdro Thanks for letting us know about this. You could also post a feature request in the llama.cpp repo.
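
For reference, once llama.cpp support does land, the usual route is to convert the weights to GGUF and load them through a llama.cpp build or its bindings. Below is a minimal sketch using the llama-cpp-python bindings; the model path and generation settings are placeholders, and this assumes a GGUF file for this model already exists (it isn't phone-specific, just an illustration of the workflow you'd be waiting on):

```python
# Minimal sketch: run a GGUF model through the llama-cpp-python bindings.
# Assumes `pip install llama-cpp-python` and a converted GGUF file for this model.
from llama_cpp import Llama

# Hypothetical path to a quantized GGUF export of this model.
llm = Llama(
    model_path="./model-q4_k_m.gguf",  # placeholder filename
    n_ctx=2048,      # context window size
    n_threads=4,     # CPU threads; tune for your device
)

# Run a short completion to verify the model loads and generates.
out = llm("Explain what GGUF is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

On a phone, the equivalent would be running the same GGUF file through a llama.cpp-based app or an Android/iOS build of llama.cpp, but that only becomes possible after the architecture is supported upstream.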