---
language:
  - ko
pipeline_tag: text-generation
tags:
  - finetune
license: other
---

# Model Card for mistral-ko-7b-tech

mistral-ko-7b-tech is a fine-tuned version of the Mistral-7B model, trained on Korean data.

## Model Details

- Model Developers: shleeeee (Seunghyeon Lee), oopsung (Sungwoo Park)
- Repository: To be added
- Model Architecture: mistral-ko-7b-tech is a fine-tuned version of Mistral-7B-v0.1.
- LoRA target modules: q_proj, k_proj, v_proj, o_proj, gate_proj (see the configuration sketch after this list)
- Train batch size: 4
- Max steps: 500
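
The full training script is not published, but the hyperparameters above map roughly onto a PEFT `LoraConfig` and `TrainingArguments` as in the following sketch. Only the target modules, batch size, and max steps come from this card; rank, alpha, dropout, and the output directory are illustrative placeholders.

```python
# Minimal sketch of the fine-tuning configuration implied by the details above.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=16,            # assumption: LoRA rank is not stated in the card
    lora_alpha=32,   # assumption: alpha is not stated in the card
    lora_dropout=0.05,  # assumption: dropout is not stated in the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj"],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="mistral-ko-7b-tech-lora",  # hypothetical output directory
    per_device_train_batch_size=4,         # train_batch from the card
    max_steps=500,                         # Max_step from the card
)
```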

## Dataset

Korean custom dataset (2,000 samples)

## Prompt template: Mistral

```
<s>[INST]{instruction}[/INST]{output}</s>
```
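
For example, an inference prompt can be assembled from this template as shown below. The Korean instruction is made up for illustration, and the leading `<s>` (BOS) token is omitted because the tokenizer normally adds it itself.

```python
# Build a prompt following the Mistral instruction template above.
# The tokenizer adds the <s> (BOS) token automatically, so it is omitted here.
instruction = "리눅스에서 특정 포트를 사용 중인 프로세스를 확인하는 방법을 알려줘."  # "How do I find the process using a specific port on Linux?"
prompt = f"[INST]{instruction}[/INST]"
```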

## Usage

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("shleeeee/mistral-ko-7b-tech")
model = AutoModelForCausalLM.from_pretrained("shleeeee/mistral-ko-7b-tech")
```

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="shleeeee/mistral-ko-7b-tech")
```
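
A sketch of a generation call with the pipeline, using the prompt template from above. The Korean instruction and the sampling parameters are illustrative, not values recommended by the model card.

```python
# Illustrative inference example; generation settings are not from the model card.
instruction = "도커 컨테이너의 로그를 확인하는 방법을 알려줘."  # "How do I check a Docker container's logs?"
prompt = f"[INST]{instruction}[/INST]"

outputs = pipe(
    prompt,
    max_new_tokens=256,      # assumption: reasonable response length
    do_sample=True,
    temperature=0.7,         # assumption: moderate sampling temperature
    return_full_text=False,  # return only the newly generated tokens
)
print(outputs[0]["generated_text"])
```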

## Evaluation
