---
license: apache-2.0
tags:
  - moe
  - merge
  - AdaptLLM/medicine-chat
  - microsoft/Orca-2-7b
datasets:
  - open-llm-leaderboard/details_Technoculture__Medchator-2x7b
model-index:
  - name: Medchator-2x7b
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 57.59
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Technoculture/Medchator-2x7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 78.14
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Technoculture/Medchator-2x7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 56.13
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Technoculture/Medchator-2x7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 48.77
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Technoculture/Medchator-2x7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 75.3
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Technoculture/Medchator-2x7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 32.83
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Technoculture/Medchator-2x7b
          name: Open LLM Leaderboard
---

# Medchator-2x7b

Medchator-2x7b is a Mixture of Experts (MoE) made with the following models:

- [AdaptLLM/medicine-chat](https://huggingface.co/AdaptLLM/medicine-chat)
- [microsoft/Orca-2-7b](https://huggingface.co/microsoft/Orca-2-7b)

## Evaluations

### Open LLM Leaderboard


| Model Name      | ARC   | HellaSwag | MMLU  | TruthfulQA | Winogrande | GSM8K |
|-----------------|-------|-----------|-------|------------|------------|-------|
| Orca-2-7b       | 78.4  | 76.1      | 53.7  | 52.4       | 74.2       | 47.2  |
| LLAMA-2-7b      | 43.2  | 77.1      | 44.4  | 38.7       | 69.5       | 16    |
| MT7Bi-sft       | 54.1  | 75.11     | -     | 43.08      | 72.14      | 15.54 |
| MT7bi-dpo       | 54.69 | 75.89     | 52.82 | 45.48      | 71.58      | 25.93 |
| Medorca-2x7b    | 54.1  | 76.04     | 54.1  | 48.04      | 74.51      | 20.64 |
| Medchator-2x7b  | 57.59 | 78.14     | 56.13 | 48.77      | 75.3       | 32.83 |

## Medical Performance

Medchator-2x7b demonstrates competitive performance on medical benchmarks.

Table: Five-Shot Performance of Medchator-2x7b, GPT-3.5, Llama-2-7b, and Llama-2-70b on Various Medical Datasets

| Dataset                    | Medchator-2x7b | GPT-3.5 | Llama-2-7b | Llama-2-70b |
|----------------------------|----------------|---------|------------|-------------|
| MMLU Anatomy               | 56.3           | 60.7    | 48.9       | 62.9        |
| MMLU Clinical Knowledge    | 63.0           | 68.7    | 46.0       | 71.7        |
| MMLU College Biology       | 63.8           | 72.9    | 47.2       | 84.7        |
| MMLU College Medicine      | 50.9           | 63.6    | 42.8       | 64.2        |
| MMLU Medical Genetics      | 67.0           | 68.0    | 55.0       | 74.0        |
| MMLU Professional Medicine | 55.1           | 69.8    | 53.6       | 75.0        |

## 🧩 Configuration

```yaml
base_model: microsoft/Orca-2-7b
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: AdaptLLM/medicine-chat
    positive_prompts:
      - "How does sleep affect cardiovascular health?"
      - "Could a plant-based diet improve arthritis symptoms?"
      - "A patient comes in with symptoms of dizziness and nausea"
      - "When discussing diabetes management, the key factors to consider are"
      - "The differential diagnosis for a headache with visual aura could include"
    negative_prompts:
      - "Recommend a good recipe for a vegetarian lasagna."
      - "Give an overview of the French Revolution."
      - "Explain how a digital camera captures an image."
      - "What are the environmental impacts of deforestation?"
      - "The recent advancements in artificial intelligence have led to developments in"
      - "The fundamental concepts in economics include ideas like supply and demand, which explain"
  - source_model: microsoft/Orca-2-7b
    positive_prompts:
      - "Here is a funny joke for you -"
      - "When considering the ethical implications of artificial intelligence, one must take into account"
      - "In strategic planning, a company must analyze its strengths and weaknesses, which involves"
      - "Understanding consumer behavior in marketing requires considering factors like"
      - "The debate on climate change solutions hinges on arguments that"
    negative_prompts:
      - "In discussing dietary adjustments for managing hypertension, it's crucial to emphasize"
      - "For early detection of melanoma, dermatologists recommend that patients regularly check their skin for"
      - "Explaining the importance of vaccination, a healthcare professional should highlight"
```

## 💻 Usage

```sh
pip install -qU transformers bitsandbytes accelerate
```

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "Technoculture/Medchator-2x7b"

# Load the tokenizer and a text-generation pipeline with the model quantized to 4-bit
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Build a chat-formatted prompt and generate a response
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
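Passing `load_in_4bit` through `model_kwargs` relies on bitsandbytes being available; the same 4-bit setup can also be written explicitly with `BitsAndBytesConfig`. A minimal sketch (only the model ID comes from this card; the remaining settings are illustrative defaults):

```python
# Alternative loading sketch: explicit 4-bit quantization via BitsAndBytesConfig.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Technoculture/Medchator-2x7b"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # store weights in 4-bit
    bnb_4bit_compute_dtype=torch.float16,  # run compute in fp16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # place layers on the available GPU(s)
)
```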

## Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Technoculture__Medchator-2x7b).

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 58.13 |
| AI2 Reasoning Challenge (25-Shot) | 57.59 |
| HellaSwag (10-Shot)               | 78.14 |
| MMLU (5-Shot)                     | 56.13 |
| TruthfulQA (0-shot)               | 48.77 |
| Winogrande (5-shot)               | 75.30 |
| GSM8k (5-shot)                    | 32.83 |
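The Avg. row is the simple mean of the six benchmark scores above:

```python
# Mean of the six Open LLM Leaderboard scores reported in the table above.
scores = [57.59, 78.14, 56.13, 48.77, 75.30, 32.83]
print(round(sum(scores) / len(scores), 2))  # 58.13
```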