vanilla1116, nielsr committed

Commit 5ddfd32 · verified · 1 Parent(s): 81071c6

Add missing metadata and clarify license (#1)

- Add missing metadata and clarify license (d9b5b3a1d17b6d7ae0e766acf993d966f6c45857)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1)
  1. README.md +22 -4
README.md CHANGED
@@ -1,6 +1,17 @@
+ ---
+ license: apache-2.0
+ library_name: transformers
+ pipeline_tag: text-classification
+ tags:
+ - hallucination-detection
+ - text-classification
+ language:
+ - en
+ ---
+
  # ANAH: Analytical Annotation of Hallucinations in Large Language Models
 
- [![arXiv](https://img.shields.io/badge/arXiv-2312.14033-b31b1b.svg)](https://arxiv.org/abs/2405.20315)
+ [![arXiv](https://img.shields.io/badge/arXiv-2405.20315-b31b1b.svg)](https://arxiv.org/abs/2405.20315)
  [![license](https://img.shields.io/github/license/InternLM/opencompass.svg)](./LICENSE)
 
  This page holds the InternLM2-7B model which is trained with the ANAH dataset. It is fine-tuned to annotate the hallucination in LLM's responses.
@@ -14,8 +25,12 @@ You have to follow the prompt in [our paper](https://arxiv.org/abs/2405.20315) t
  The models follow the conversation format of InternLM2-chat, with the template protocol as:
 
  ```python
- dict(role='user', begin='<|im_start|>user\n', end='<|im_end|>\n'),
- dict(role='assistant', begin='<|im_start|>assistant\n', end='<|im_end|>\n'),
+ dict(role='user', begin='<|im_start|>user\n', end='<|im_end|>\n'),
+ dict(role='assistant', begin='<|im_start|>assistant\n', end='<|im_end|>\n'),
  ```
 
  ## 🖊️ Citation
@@ -27,4 +42,7 @@ If you find this project useful in your research, please consider citing:
  author={Ji, Ziwei and Gu, Yuzhe and Zhang, Wenwei and Lyu, Chengqi and Lin, Dahua and Chen, Kai},
  journal={arXiv preprint arXiv:2405.20315},
  year={2024}
- }
+ }
+ ```
+
+ Code: The source code for training and evaluating this model can be found at https://github.com/open-compass/ANAH
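
The template protocol in the diff above pairs each conversation role with `begin`/`end` markers. As a minimal sketch of how such a template could be applied, the following assembles a prompt string from (role, content) pairs. The marker strings come from the README diff; the `build_prompt` helper and the example message are illustrative assumptions, not part of the model's published API.

```python
# Marker strings taken from the InternLM2-chat template in the README diff;
# the dict layout and helper below are an illustrative sketch.
TEMPLATE = {
    'user': dict(begin='<|im_start|>user\n', end='<|im_end|>\n'),
    'assistant': dict(begin='<|im_start|>assistant\n', end='<|im_end|>\n'),
}

def build_prompt(messages):
    """Wrap each (role, content) pair in its role's begin/end markers."""
    parts = []
    for role, content in messages:
        t = TEMPLATE[role]
        parts.append(t['begin'] + content + t['end'])
    return ''.join(parts)

# Hypothetical usage: a single user turn asking for hallucination annotation.
prompt = build_prompt([('user', 'Annotate the hallucinations in this response.')])
print(prompt)
```

Each turn is delimited by `<|im_start|>{role}` and `<|im_end|>`, so multi-turn conversations are produced simply by passing more (role, content) pairs.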