nielsr HF staff committed on
Commit cd40ed6 · verified · 1 Parent(s): c3e6242

Add metadata and paper link

This PR adds the correct library name, pipeline tag, and a link to the paper to the model card.

Files changed (1)
README.md +7 -2
README.md CHANGED
@@ -9,6 +9,8 @@ datasets:
  - codingsteven/Llama-3-8B-chat
  language:
  - zh
+ metrics:
+ - accuracy
  base_model:
  - meta-llama/Llama-3.1-8B
  model-index:
@@ -71,12 +73,15 @@ model-index:
  value: 0.37368330167501296
  stderr: 0.00438421288652232
  verified: false
+ pipeline_tag: text-generation
+ library_name: transformers
  ---
+
  # Control-LLM-Llama3.1-8B-SynE-Concat16-Lerp
  This is a fine-tuned model of Llama-3.1-8B for muliligual-Chinese tasks on SynE dataset by Control LLM-Concat16-Lerp.

  ## Linked Paper
- This model is associated with the paper: [Control-LLM](https://arxiv.org/abs/2501.10979).
+ This model is associated with the paper: [Control LLM: Controlled Evolution for Intelligence Retention in LLM](https://huggingface.co/papers/2501.10979).

  ## Evaluation Results
  Here is an overview of the evaluation results and findings:
@@ -107,4 +112,4 @@ The table below summarizes evaluation results across Chinese tasks and original
  - **MLU**: MMLU (Massive Multitask Language Understanding)
  - **MLUP**: MMLU Pro
  - **O-Avg**: Original Capability - Size Weighted Average across BBH, MLU, and MLUP
- - **Overall**: Combined average across all tasks
+ - **Overall**: Combined average across all tasks
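
For context, here is a minimal sketch of what the newly added `library_name: transformers` and `pipeline_tag: text-generation` metadata imply for consumers of this model card. The repository id below is an assumption for illustration only; substitute the actual Hugging Face repo id hosting this model.

```python
# Minimal sketch: load the model the way the updated card metadata advertises.
# Assumption: the repo id below is illustrative; replace it with the actual
# Hugging Face repository id for this model card.
from transformers import pipeline

generator = pipeline(
    "text-generation",  # matches the added pipeline_tag
    model="ControlLLM/Control-LLM-Llama3.1-8B-SynE-Concat16-Lerp",
)

# The model is tuned for Chinese tasks, so a Chinese prompt is a natural test.
print(generator("请用中文简单介绍一下你自己。", max_new_tokens=64)[0]["generated_text"])
```

The `pipeline_tag` also determines which inference widget the Hub renders on the model page, and `library_name` controls which "Use this model" snippet is shown.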