2.2bpw (high quality loss; intended only for testing on 24 GB VRAM)
4.0bpw
6.0bpw
8.0bpw
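
Each bitrate is typically published as its own revision of the repository, the usual convention for exl2 quants; the branch name used below is an assumption, so check the repository's branch list. A minimal download sketch using huggingface_hub:

from huggingface_hub import snapshot_download

# Assumed branch naming: one branch per bitrate, e.g. "4.0bpw".
snapshot_download(
    repo_id="RioShiina/Llama-3-Swallow-70B-Instruct-v0.1-exl2",
    revision="4.0bpw",  # pick the bitrate that fits your VRAM
    local_dir="Llama-3-Swallow-70B-Instruct-v0.1-4.0bpw",
)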

Llama-3-Swallow-70B-Instruct-v0.1-exl2

Prompt template

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

あなたは誠実で優秀な日本人のアシスタントです。<|eot_id|><|start_header_id|>user<|end_header_id|>

東京の夜空に打ち上がっている花火の下、向かい合っている燕とラマの温かい物語を書いてください。<|eot_id|><|start_header_id|>assistant<|end_header_id|>

(In English, the system prompt reads "You are a sincere and excellent Japanese assistant." and the example user prompt asks: "Under the fireworks lighting up Tokyo's night sky, please write a heartwarming story about a swallow and a llama facing each other.")
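
The template follows the standard Llama 3 chat format: a system block, a user block, then an open assistant header for the model to complete. A minimal sketch that assembles the same string in Python (no model dependencies; build_prompt is a hypothetical helper, not part of any library):

def build_prompt(system: str, user: str) -> str:
    # Llama 3 chat format: system block, user block, then an open
    # assistant header that the model continues from.
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt(
    "あなたは誠実で優秀な日本人のアシスタントです。",
    "東京の夜空に打ち上がっている花火の下、向かい合っている燕とラマの温かい物語を書いてください。",
)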

License

META LLAMA 3 COMMUNITY LICENSE

Citations

@misc{llama3swallow,
    title={Llama 3 Swallow},
    author={Swallow LLM},
    year={2024},
    url={https://swallow-llm.github.io/llama3-swallow.en.html},
}

@article{llama3modelcard,
    title={Llama 3 Model Card},
    author={AI@Meta},
    year={2024},
    url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
