Swallow-8B is a Llama-3 model that is highly fluent in Japanese thanks to additional continued pretraining on Japanese text.
This model applies a difference-vector merge with Meta's Instruct model:

rinna/llama-3-Swallow-8b + 0.7*(meta-llama/Meta-Llama-3-8B-Instruct - meta-llama/Meta-Llama-3-8B)
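
The following is a minimal sketch of how a merge along these lines could be reproduced, assuming all three checkpoints share the same architecture, parameter names, and tensor shapes. The model IDs and the 0.7 scaling factor are taken from the formula above; the output directory name is only illustrative.

```python
# Sketch of the difference-vector merge: swallow + 0.7 * (instruct - base).
# Assumes matching parameter names/shapes across the three checkpoints.
import torch
from transformers import AutoModelForCausalLM

BASE = "meta-llama/Meta-Llama-3-8B"
INSTRUCT = "meta-llama/Meta-Llama-3-8B-Instruct"
SWALLOW = "rinna/llama-3-Swallow-8b"
ALPHA = 0.7  # scaling factor from the formula above

base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)
instruct = AutoModelForCausalLM.from_pretrained(INSTRUCT, torch_dtype=torch.bfloat16)
merged = AutoModelForCausalLM.from_pretrained(SWALLOW, torch_dtype=torch.bfloat16)

with torch.no_grad():
    base_sd = base.state_dict()
    inst_sd = instruct.state_dict()
    for name, param in merged.state_dict().items():
        # Skip any tensors that do not line up across checkpoints.
        if name in base_sd and name in inst_sd and base_sd[name].shape == param.shape:
            # merged = swallow + alpha * (instruct - base)
            param.add_(ALPHA * (inst_sd[name] - base_sd[name]))

merged.save_pretrained("Llama3-Swallow-8B-instruct-vector-merged")  # illustrative path
```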

See Llama-3-Swallow-8b for details.
