---
language:
- ko
pipeline_tag: text-generation
tags:
- finetune
---

# Model Card for mistral-ko-7b-tech

mistral-ko-7b-tech is a fine-tuned version of the Mistral-7B model, trained on a Korean dataset with NEFT (noisy embedding fine-tuning).

## Model Details

* **Model Developers** : shleeeee (Seunghyeon Lee)
* **Repository** : To be added
* **Model Architecture** : mistral-ko-7b-tech is a fine-tuned version of Mistral-7B-v0.1.
* **LoRA target modules** : q_proj, k_proj, v_proj, o_proj, gate_proj
* **train_batch** : 4
* **Max_step** : 500

## Dataset

Korean Custom Dataset

## Prompt template: Mistral

```
[INST]{instruction}[/INST]{output}
```

## Usage

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("shleeeee/mistral-ko-7b-tech")
model = AutoModelForCausalLM.from_pretrained("shleeeee/mistral-ko-7b-tech")

# Or use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="shleeeee/mistral-ko-7b-tech")
```

## Evaluation

![image/png](https://cdn-uploads.huggingface.co/production/uploads/654495fa893aec5da96e9134/6z75dYa8TdTy4Y7EIl0CK.png)
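As a minimal sketch of how the prompt template and the pipeline fit together, the snippet below wraps an instruction in the Mistral `[INST] ... [/INST]` format before generation. The helper names (`format_prompt`, `generate`) and the sampling parameters are our own illustrative choices, not part of the model card.

```python
def format_prompt(instruction: str) -> str:
    # Apply the card's Mistral instruction template; the model was
    # fine-tuned on text in this format, so raw prompts may underperform.
    return f"[INST]{instruction}[/INST]"


def generate(instruction: str, max_new_tokens: int = 256) -> str:
    # Imported lazily: loading the 7B weights downloads ~14 GB on first
    # call and is best run on a GPU.
    from transformers import pipeline

    pipe = pipeline("text-generation", model="shleeeee/mistral-ko-7b-tech")
    # Sampling settings here are illustrative defaults, not tuned values.
    out = pipe(
        format_prompt(instruction),
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.7,
    )
    return out[0]["generated_text"]


if __name__ == "__main__":
    # Example instruction in Korean: "Please explain what fine-tuning is."
    print(generate("파인튜닝이 무엇인지 설명해 주세요."))
```

Note that the pipeline's `generated_text` field echoes the prompt followed by the model's continuation, so callers who want only the completion should strip the formatted prompt prefix.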