---
license: other
tags:
- generated_from_trainer
- mlx
base_model: meta-llama/Meta-Llama-3-8B
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- HuggingFaceH4/ultrachat_200k
- microsoft/orca-math-word-problems-200k
- abacusai/SystemChat-1.1
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
model-index:
- name: out
  results: []
---
# mlx-community/dolphin-2.9-llama3-8b-unquantized
This model was converted to MLX format from [`cognitivecomputations/dolphin-2.9-llama3-8b`](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b) using mlx-lm version **0.10.0**.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (if needed) and load the model weights and tokenizer.
model, tokenizer = load("mlx-community/dolphin-2.9-llama3-8b-unquantized")

# Generate a completion; verbose=True streams tokens as they are produced.
response = generate(model, tokenizer, prompt="hello", verbose=True)
```