---
language:
  - en
  - ko
license: cc-by-nc-4.0
datasets:
  - kyujinpy/KOR-gugugu-platypus-set
base_model:
  - yanolja/KoSOLAR-10.7B-v0.2
pipeline_tag: text-generation
---

# KoSOLAR-v0.2-gugutypus-10.7B


## Model Details

**Model Developers**

- oneonlee

**Model Architecture**

- KoSOLAR-v0.2-gugutypus-10.7B is an instruction fine-tuned, auto-regressive language model based on the SOLAR transformer architecture.

**Base Model**

- [yanolja/KoSOLAR-10.7B-v0.2](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.2)

**Training Dataset**

- [kyujinpy/KOR-gugugu-platypus-set](https://huggingface.co/datasets/kyujinpy/KOR-gugugu-platypus-set) (a hypothetical fine-tuning sketch follows below)

**Environments**

- Google Colab (Pro)
  - GPU: NVIDIA A100 40GB
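
The card does not include the training script, so the following is only a minimal, hypothetical sketch of how instruction fine-tuning the base model on the dataset above might look with plain `transformers`. The Alpaca/Platypus-style column names (`instruction`, `output`), the prompt template, and all hyperparameters are assumptions, not the author's actual setup; in practice, a 10.7B model on a single A100 40GB would likely also require a parameter-efficient method such as LoRA.

```python
# Hypothetical fine-tuning sketch -- NOT the author's actual training script.
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "yanolja/KoSOLAR-10.7B-v0.2"
tokenizer = AutoTokenizer.from_pretrained(base)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # needed for batch padding
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.bfloat16, device_map="auto"
)

# Column names are an assumption (Alpaca/Platypus-style schema).
dataset = load_dataset("kyujinpy/KOR-gugugu-platypus-set", split="train")

def format_example(example):
    # Assumed prompt template; the actual template is not documented here.
    return {
        "text": f"### Instruction:\n{example['instruction']}\n\n"
                f"### Response:\n{example['output']}"
    }

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=2048)

dataset = dataset.map(format_example)
tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="KoSOLAR-v0.2-gugutypus-10.7B",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,  # illustrative values only
        num_train_epochs=1,
        learning_rate=2e-5,
        bf16=True,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    # Causal-LM collator: labels are the input ids, shifted inside the model
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```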

## Model comparisons

- Ko-LLM leaderboard (YYYY/MM/DD) [link]

| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| KoSOLAR-gugutypus | NaN | NaN | NaN | NaN | NaN | NaN |

- (ENG) AI-Harness evaluation [link]

| Tasks | Version | Filter | n-shot | Metric | Value | Stderr |
| --- | --- | --- | --- | --- | --- | --- |
| HellaSwag | 1 | none | 0 | acc | 0.6075 | ± 0.0049 |
| HellaSwag | 1 | none | 5 | acc |  | ± |
| BoolQ | 2 | none | 0 | acc | 0.8737 | ± 0.0058 |
| BoolQ | 2 | none | 5 | acc |  | ± |
| COPA | 1 | none | 0 | acc | 0.8300 | ± 0.0378 |
| COPA | 1 | none | 5 | acc |  | ± |
| MMLU | N/A | none | 0 | acc | 0.5826 | ± 0.1432 |
| MMLU | N/A | none | 5 | acc |  | ± |
- (KOR) AI-Harness evaluation [link] (a reproduction sketch follows the table)

| Tasks | Version | Filter | n-shot | Metric | Value | Stderr |
| --- | --- | --- | --- | --- | --- | --- |
| KoBEST-HellaSwag |  | none | 0 | acc |  | ± |
| KoBEST-HellaSwag |  | none | 5 | acc |  | ± |
| KoBEST-BoolQ |  | none | 0 | acc |  | ± |
| KoBEST-BoolQ |  | none | 5 | acc |  | ± |
| KoBEST-COPA |  | none | 0 | acc |  | ± |
| KoBEST-COPA |  | none | 5 | acc |  | ± |
| KoBEST-SentiNeg |  | none | 0 | acc |  | ± |
| KoBEST-SentiNeg |  | none | 5 | acc |  | ± |
| KoBEST-MMLU |  | none | 0 | acc |  | ± |
| KoBEST-MMLU |  | none | 5 | acc |  | ± |
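
The AI-Harness numbers above could, in principle, be regenerated with EleutherAI's lm-evaluation-harness (`pip install lm-eval`). Below is a minimal sketch using its Python entry point; the KoBEST task names follow the harness's v0.4 naming conventions and are assumptions, since the exact harness version and settings used for this card are not stated.

```python
# Sketch of re-running the (KOR) AI-Harness evaluation; task names and
# settings are assumptions, not the card's documented configuration.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=oneonlee/KoSOLAR-v0.2-gugutypus-10.7B,dtype=float16",
    tasks=["kobest_hellaswag", "kobest_boolq", "kobest_copa", "kobest_sentineg"],
    num_fewshot=0,  # rerun with num_fewshot=5 for the 5-shot rows
)
print(results["results"])
```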

## Implementation Code

```python
### KoSOLAR-gugutypus
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "oneonlee/KoSOLAR-v0.2-gugutypus-10.7B"

# Load the model in half precision and let Accelerate place it on the
# available device(s)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
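
A short generation example building on the model and tokenizer loaded above; the prompt and decoding settings are illustrative assumptions, not a documented chat template.

```python
# Minimal generation sketch using the objects loaded above.
prompt = "대한민국의 수도는 어디인가요?"  # "What is the capital of South Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```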