CoreML conversion of Llama-3.2-3B-Instruct with a 512-token context length. Optimized for the Apple Neural Engine.
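
As a quick sanity check once the package is downloaded, you can load it with coremltools and print its declared inputs and outputs. A minimal sketch, assuming the package is named `Llama-3.2-3B-Instruct.mlpackage` (the actual file name and tensor names may differ; check the repo files):

```python
import coremltools as ct

# Load the Core ML package, requesting the Neural Engine where available.
# The file name below is an assumption; use the .mlpackage from this repo.
model = ct.models.MLModel(
    "Llama-3.2-3B-Instruct.mlpackage",
    compute_units=ct.ComputeUnit.CPU_AND_NE,
)

# Inspect the declared inputs/outputs (names and shapes vary by conversion).
spec = model.get_spec()
for inp in spec.description.input:
    print("input:", inp.name)
for out in spec.description.output:
    print("output:", out.name)
```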

Use this CLI to download and run inference. macOS 14 (Sonoma) is required.
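
If you prefer to fetch the model files yourself rather than through the CLI, the Hugging Face Hub client can download the full repo snapshot. A minimal sketch, assuming `huggingface_hub` is installed:

```python
from huggingface_hub import snapshot_download

# Download every file in the repo (including the .mlpackage) to a local cache
# directory and return its path.
local_dir = snapshot_download(repo_id="smpanaro/Llama-3.2-3B-Instruct-CoreML")
print("Model files downloaded to:", local_dir)
```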

This model will likely run slowly or not at all on M1 Macs and phones. Consider trying the 1B model for those devices: smpanaro/Llama-3.2-1B-Instruct-CoreML
