sonsus committed · Commit ca385af · 1 Parent(s): c6eb598
Files changed (3):
  1. README.md +0 -2
  2. README_en.md +122 -0
  3. pages/quick_start_guide.py +2 -2
README.md CHANGED
@@ -11,8 +11,6 @@ license: cc-by-4.0
 short_description: VARCO Arena is a reference-free LLM benchmarking approach
 ---
 
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
-
 # Varco Arena
 Varco Arena conducts tournaments among the models under comparison for each test set instruction, ranking the models accurately at an affordable cost. This is more accurate and cost-effective than judging win rates by comparing against reference outputs.
 
README_en.md ADDED
@@ -0,0 +1,122 @@
# Varco Arena
Varco Arena conducts tournaments among the models under comparison for each test set instruction, ranking the models accurately at an affordable cost. This is more accurate and cost-effective than judging win rates by comparing against reference outputs.

For more information, the following resources may help you understand how it works:
* [Paper](https://huggingface.co/papers/2411.01281)
* [Blog Post (KR)](https://ncsoft.github.io/ncresearch/12cc62c1ea0d981971a8923401e8fe6a0f18563d)
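
To give a feel for the mechanics, here is a toy sketch only, assuming single-elimination brackets per instruction and Elo-style aggregation as described in the paper. The judge below is a random stub where the real system queries an LLM judge; none of this is the project's actual code.

```python
import random

random.seed(0)

def judge(prompt: str, out_a: str, out_b: str) -> bool:
    """Stand-in for the LLM judge: True if output A beats output B."""
    return random.random() < 0.5

def run_bracket(models, prompt, outputs):
    """Single-elimination bracket over the models for one instruction.
    Returns (winner, loser) pairs from every match played."""
    matches, pool = [], models[:]
    random.shuffle(pool)  # fresh bracket per instruction
    while len(pool) > 1:
        nxt = []
        for a, b in zip(pool[::2], pool[1::2]):
            w, l = (a, b) if judge(prompt, outputs[a], outputs[b]) else (b, a)
            matches.append((w, l))
            nxt.append(w)
        if len(pool) % 2:          # odd model out gets a bye
            nxt.append(pool[-1])
        pool = nxt
    return matches

def elo(matches, k=16.0):
    """Aggregate all match results into Elo-style ratings."""
    r = {}
    for w, l in matches:
        rw, rl = r.setdefault(w, 1000.0), r.setdefault(l, 1000.0)
        expected = 1.0 / (1.0 + 10 ** ((rl - rw) / 400.0))
        r[w], r[l] = rw + k * (1.0 - expected), rl - k * (1.0 - expected)
    return r

models = ["m1", "m2", "m3", "m4"]
all_matches = []
for i in range(100):  # one bracket per test set instruction
    outs = {m: f"{m}'s answer" for m in models}
    all_matches += run_bracket(models, f"instruction {i}", outs)
print(sorted(elo(all_matches).items(), key=lambda kv: -kv[1]))
```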
## Quickstart
### Running the Web Demo locally (Streamlit, recommended!)
```bash
git clone [THIS_REPO]
# install the requirements listed below; we recommend miniforge for managing environments
cd streamlit_app_local
bash run.sh
```
For more details, see `[THIS_REPO]/streamlit_app_local/README.md`.

### CLI use
* located at
  * `varco_arena/`
* debug configurations for vscode at
  * `varco_arena/.vscode`
```bash
## gpt-4o-mini as a judge
python main.py -i "./some/dirpath/to/jsonl/files" -o SOME_REL_PATH_TO_CREATE -m tournament -e "gpt-4o-mini"
## vllm-openai served LLM as a judge
python main.py -i "./some/dirpath/to/jsonl/files" -o SOME_REL_PATH_TO_CREATE -e SOME_MODEL_NAME_SERVED -m tournament -u "http://url_to/your/vllm_openai_server:someport"

# dbg lines
## openai api judge dbg
python main.py -i "rsc/inputs_for_dbg/dbg_400_error_inputs/" -o SOME_WANTED_TARGET_DIR -e gpt-4o-mini
## other testing lines
python main.py -i "rsc/inputs_for_dbg/[SOME_DIRECTORY]/" -o SOME_WANTED_TARGET_DIR -e gpt-4o-mini
## dummy judge dbg (checking errors without api requests)
python main.py -i "rsc/inputs_for_dbg/dbg_400_error_inputs/" -o SOME_WANTED_TARGET_DIR -e debug
```
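
For the vllm-openai route, the judge model needs to be served with vLLM's OpenAI-compatible server first. One common way to launch it, sketched with the same placeholders as above (exact flags may differ across vLLM versions):

```bash
# SOME_MODEL_NAME_SERVED and someport are placeholders matching the command above
python -m vllm.entrypoints.openai.api_server \
    --model SOME_MODEL_NAME_SERVED \
    --port someport
```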

## Requirements
We tested this in a `python = 3.11.9` environment. `requirements.txt`:
```
openai>=1.17.0
munch
pandas
numpy
tqdm>=4.48.0
plotly
scikit-learn
kaleido
tiktoken>=0.7.0
pyyaml
transformers
streamlit>=1.40.2
openpyxl
fire==0.6.0
git+https://github.com/shobrook/openlimit.git#egg=openlimit # do not install this from PyPI

# Linux
uvloop
# Windows
winloop
```
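
A typical environment setup, assuming the list above is saved as `requirements.txt` at the repo root and using miniforge/conda per the recommendation in the Quickstart (the environment name here is arbitrary):

```bash
conda create -n varco_arena python=3.11.9 -y
conda activate varco_arena
pip install -r requirements.txt  # installs openlimit from git, not PyPI
```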

#### Arguments
- `-i, --input` : directory containing the input jsonlines files (LLM outputs)
- `-o, --output_dir` : directory where the results will be placed
- `-e, --evaluation` : judge model specification (e.g. "gpt-4o-2024-05-13", "gpt-4o-mini", \[vllm-served-model-name\])
- `-k, --openai_api_key` : OpenAI API key
- `-u, --openai_url` : URL of an openai_styled_llm_server (requested via the openai SDK)

#### Advanced
- `-j, --n_jobs` : number of concurrent requests, bounded by `asyncio.Semaphore(n)` (see the sketch below)
- `-p, --evalprompt` : [see the prompts directory](./varco_arena/prompts/*.yaml)
- `-lr, --limit_requests` : vLLM OpenAI server request limit (default: 7,680)
- `-lt, --limit_tokens` : vLLM OpenAI server token limit (default: 15,728,640)
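
For illustration, this is the general pattern `--n_jobs` controls, a minimal sketch with hypothetical function names rather than the project's actual code:

```python
import asyncio

async def judge_one(sem: asyncio.Semaphore, match_id: int) -> str:
    async with sem:               # at most n_jobs requests in flight
        await asyncio.sleep(0.1)  # stand-in for the actual judge API call
        return f"match {match_id} judged"

async def main(n_jobs: int = 8) -> None:
    sem = asyncio.Semaphore(n_jobs)
    results = await asyncio.gather(*(judge_one(sem, i) for i in range(32)))
    print(f"{len(results)} matches judged")

asyncio.run(main())
```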

#### Input Data Format
[input jsonl guides](./streamlit_app_local/guide_mds/input_jsonls_en.md)
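
For orientation only (the guide linked above is authoritative): the input directory holds jsonlines files of LLM outputs, presumably one file per model, with one JSON object per line carrying at least the fields named in the FAQ below (`instruction`, `source`, `generated`). A minimal sketch of producing such a file:

```python
import json

# hypothetical rows: only the instruction/source/generated field names come from
# this README; everything else here is a guess, so defer to the linked guide
rows = [
    {
        "instruction": "Write a haiku about autumn.",
        "source": "",  # grounding/context text, empty if none
        "generated": "Leaves drift past my door...",
    },
]
with open("model_A.jsonl", "w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row, ensure_ascii=False) + "\n")
```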


## Contributing & Customizing
#### Do this after git clone and installation
```bash
pip install pre-commit
pre-commit install
```
#### Before commit
```bash
bash precommit.sh # the black formatter will reformat the code
```

## FAQ
* I want to apply my custom judge prompt to run Varco Arena.
  * [`./varco_arena/prompts/`](./varco_arena/prompts/__init__.py) defines the prompts with `yaml` files and the class objects for them. Edit those as you need.
* I want tailored judge prompts for each row of the test set (e.g. rows up to the 100th use `prompt1`, rows from the 101st on use `prompt2`).
  * As shown at the link above, `load_prompt` receives `promptname` + `task` as parameters to load the prompt. The function is called in [`./varco_arena/manager.py:async_run`](./varco_arena/manager.py). See the sketch after this list.
* I want more fields in my LLM outputs jsonl files for tailored use, i.e. fields beyond `instruction`, `source`, and `generated`.
  * It's going to get tricky, but here is a brief guide.
  * You might have to edit `varco_arena/eval_utils.py`:`async_eval_w_prompt` (this part calls `PROMPT_OBJ.complete_prompt()`).
  * All the related code will then require revision.
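
One way to do the per-row prompt selection from the second FAQ item, sketched under the assumption that `load_prompt` is importable from `varco_arena.prompts` with the `(promptname, task)` signature described above; the wrapper name, prompt names, and cutoff are hypothetical:

```python
# hypothetical wrapper around the loader referenced in the FAQ above;
# adapt the import path and signature to the actual code
from varco_arena.prompts import load_prompt

def load_prompt_for_row(row_idx: int, task: str):
    promptname = "prompt1" if row_idx < 100 else "prompt2"  # hypothetical names
    return load_prompt(promptname, task)
```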

## Special Thanks to (contributors)
- Minho Lee (@Dialogue Model Team, NCSOFT) [github](https://github.com/minolee/)
  - query wrapper
  - rag prompt
- Jumin Oh (@Generation Model Team, NCSOFT)
  - overall prototyping of the system in haste


## Citation
If you found our work helpful, consider citing our paper!
```
@misc{son2024varcoarenatournamentapproach,
      title={Varco Arena: A Tournament Approach to Reference-Free Benchmarking Large Language Models},
      author={Seonil Son and Ju-Min Oh and Heegon Jin and Cheolhun Jang and Jeongbeom Jeong and Kuntae Kim},
      year={2024},
      eprint={2411.01281},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2411.01281},
}
```
pages/quick_start_guide.py CHANGED
@@ -9,6 +9,6 @@ set_nav_bar(
 
 
 if st.session_state.korean:
-    st.markdown(open("varco_arena/README_kr.md").read())
+    st.markdown(open("README_kr.md").read())
 else:
-    st.markdown(open("varco_arena/README_en.md").read())
+    st.markdown(open("README_en.md").read())