---
pipeline_tag: text-generation
inference: false
tags:
  - facebook
  - meta
  - llama
  - llama-2
  - mlx
---

# Llama 2

Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, in npz format suitable for use in Apple's MLX framework.

The weights have been converted from the original bfloat16 type to float16, because NumPy does not support bfloat16 out of the box.
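As a minimal sketch of what this conversion implies (using a synthetic array in place of the real weights, since the actual archive contents are not shown here), the float16 tensors round-trip cleanly through NumPy's npz format:

```python
import io

import numpy as np

# NumPy has no native bfloat16 dtype, which is why the weights in this
# repo were re-saved as float16. Synthetic stand-in for a weight tensor:
w = np.random.randn(4, 4).astype(np.float32)

# Save as float16 in npz format, the format MLX's loader consumes.
buf = io.BytesIO()
np.savez(buf, layer0=w.astype(np.float16))
buf.seek(0)

loaded = np.load(buf)
print(loaded["layer0"].dtype)  # float16
```

The tensor name `layer0` and the in-memory buffer are illustrative only; the real archive stores the model's actual parameter names on disk.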

Please refer to the original model card for details on Llama 2.