Model Card for mistral-ko-7b-tech

mistral-ko-7b-tech is a Mistral-7B model fine-tuned on a Korean dataset.

Model Details

  • Model Developers : shleeeee (Seunghyeon Lee), oopsung (Sungwoo Park)
  • Repository : To be added
  • Model Architecture : mistral-ko-7b-tech is a fine-tuned version of Mistral-7B-v0.1.
  • LoRA target modules : q_proj, k_proj, v_proj, o_proj, gate_proj (see the configuration sketch after this list)
  • train_batch : 4
  • Max_step : 500
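
For reference, a minimal sketch of the LoRA setup implied by the settings above, using the peft library. The rank, alpha, and dropout values are assumptions; this card does not state them.

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Base model that the card states was fine-tuned.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

# LoRA configuration using the target modules listed above.
# r, lora_alpha, and lora_dropout are assumed values, not from the card.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()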

Dataset

Korean Custom Dataset (2,000 samples)

Prompt template: Mistral

<s>[INST]{instruction}[/INST]{output}</s>
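
A minimal sketch of filling this template in Python. For a training example both fields are filled; at inference time the prompt stops after [/INST] and the model generates the output.

def build_prompt(instruction, output=None):
    # Training example: both fields filled, closed with </s>.
    if output is not None:
        return f"<s>[INST]{instruction}[/INST]{output}</s>"
    # Inference prompt: the model completes the text after [/INST].
    return f"<s>[INST]{instruction}[/INST]"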

Usage

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("shleeeee/mistral-ko-7b-tech")
model = AutoModelForCausalLM.from_pretrained("shleeeee/mistral-ko-7b-tech")

# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="shleeeee/mistral-ko-7b-tech")
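
Putting the two together, a short generation example using the prompt template above. The generation parameters are illustrative, not settings recommended by the card.

# The tokenizer prepends the BOS token (<s>) automatically,
# so the literal "<s>" is omitted from the prompt string here.
prompt = "[INST]한국어로 간단히 자기소개를 해줘.[/INST]"
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])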

Evaluation

(Figure: evaluation results)
