---
license: mit
base_model: open-neo/Kyro-n1-14B
library_name: transformers
language:
- en
- zh
- fr
- es
- pt
- de
- it
- ru
- ja
- ko
- vi
- th
- ar
- fa
- he
- tr
- cs
- pl
- hi
- bn
- ur
- id
- ms
- lo
- my
- ceb
- km
- tl
- nl
tags:
- trl
- Reasoning
- open-llm
- synthetic-data
- Deepseek-R1
- Qwen2.5
- fine-tune
- unsloth
- Conversational
- Agentic
- mlx
- mlx-my-repo
---

# KYUNGYONG/Kyro-n1-14B-4bit

The model [KYUNGYONG/Kyro-n1-14B-4bit](https://huggingface.co/KYUNGYONG/Kyro-n1-14B-4bit) was converted to MLX format from [open-neo/Kyro-n1-14B](https://huggingface.co/open-neo/Kyro-n1-14B) using mlx-lm version **0.21.5**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the quantized model and its tokenizer from the Hugging Face Hub
model, tokenizer = load("KYUNGYONG/Kyro-n1-14B-4bit")

prompt = "hello"

# If the tokenizer ships a chat template, wrap the prompt in it
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
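mlx-lm also installs command-line entry points, so the same model can be queried without writing any Python. A minimal sketch, assuming `mlx-lm` is already installed and the model can be fetched from the Hub (the prompt here is only illustrative):

```shell
# Download the 4-bit MLX weights (if needed) and generate a completion
mlx_lm.generate --model KYUNGYONG/Kyro-n1-14B-4bit --prompt "hello"
```

The CLI applies the tokenizer's chat template automatically when one is present, mirroring the Python snippet above.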