mav23 committed (verified)
Commit 11d9b33 · Parent(s): fd23406

Upload folder using huggingface_hub

Files changed (3):
  1. .gitattributes +1 -0
  2. README.md +206 -0
  3. html-pruner-llama-1b.Q4_0.gguf +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+html-pruner-llama-1b.Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,206 @@
---
language:
- en
library_name: transformers
base_model: meta-llama/Llama-3.2-1B
license: apache-2.0
---

## Model Information

We release the HTML pruner model used in **HtmlRAG: HTML is Better Than Plain Text for Modeling Retrieval Results in RAG Systems**.

<p align="left">
Useful links: 📝 <a href="https://arxiv.org/abs/2411.02959" target="_blank">Paper</a> • 🤗 <a href="https://huggingface.co/zstanjj/SlimPLM-Query-Rewriting/" target="_blank">Hugging Face</a> • 🧩 <a href="https://github.com/plageon/SlimPLM" target="_blank">GitHub</a>
</p>

We propose HtmlRAG, which uses HTML instead of plain text as the format of external knowledge in RAG systems. To tackle the long context brought by HTML, we propose **Lossless HTML Cleaning** and **Two-Step Block-Tree-Based HTML Pruning**.

- **Lossless HTML Cleaning**: This cleaning process removes only entirely irrelevant content and compresses redundant structures, retaining all semantic information in the original HTML. The compressed HTML produced by lossless cleaning suits RAG systems that have long-context LLMs and do not want to lose any information before generation.

- **Two-Step Block-Tree-Based HTML Pruning**: The block-tree-based HTML pruning consists of two steps, both conducted on the block tree structure. The first pruning step uses an embedding model to calculate scores for blocks, while the second step uses a path generative model. The first step processes the result of lossless HTML cleaning, and the second step processes the result of the first pruning step.

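As a rough illustration of the pruning idea (not the htmlrag API — `score_block` and `prune_blocks` are hypothetical names, and word overlap stands in for the embedding/generative scorers), blocks can be ranked against the question and then kept greedily until a token budget is exhausted:

```python
# Toy sketch of budget-constrained block pruning.
# NOT the htmlrag implementation: scoring and token counting are
# deliberately simplified (word overlap, whitespace tokens).

def score_block(question: str, block: str) -> int:
    """Score a block by word overlap with the question (a stand-in
    for the embedding or generative scorer)."""
    q_words = set(question.lower().split())
    return len(q_words & set(block.lower().split()))

def prune_blocks(question: str, blocks: list[str], max_tokens: int) -> list[str]:
    """Keep the highest-scoring blocks whose total token count fits in
    max_tokens; return survivors in original document order."""
    ranked = sorted(range(len(blocks)),
                    key=lambda i: score_block(question, blocks[i]),
                    reverse=True)
    kept, used = set(), 0
    for i in ranked:
        n = len(blocks[i].split())
        if used + n <= max_tokens:
            kept.add(i)
            used += n
    return [blocks[i] for i in sorted(kept)]

question = "When was the bellagio in las vegas built?"
blocks = [
    "<title>When was the bellagio in las vegas built?</title>",
    "<p>Some other text</p>",
    "<p>The Bellagio was built in 1998.</p>",
]
print(prune_blocks(question, blocks, max_tokens=16))
```

With a 16-token budget the two question-relevant blocks fit and the filler block is dropped; the real pipeline applies the same budget logic against a chat tokenizer.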

🌹 If you use this model, please ✨star our **[GitHub repository](https://github.com/plageon/HTMLRAG)** to support us. Your star means a lot!

## 📦 Installation

Install the package using pip:
```bash
pip install htmlrag
```
Or install the package from source:
```bash
pip install -e .
```

---

## 📖 User Guide

### 🧹 HTML Cleaning

```python
from htmlrag import clean_html

question = "When was the bellagio in las vegas built?"
html = """
<html>
<head>
<title>When was the bellagio in las vegas built?</title>
</head>
<body>
<p class="class0">The Bellagio is a luxury hotel and casino located on the Las Vegas Strip in Paradise, Nevada. It was built in 1998.</p>
</body>
<div>
<div>
<p>Some other text</p>
<p>Some other text</p>
</div>
</div>
<p class="class1"></p>
<!-- Some comment -->
<script type="text/javascript">
document.write("Hello World!");
</script>
</html>
"""

simplified_html = clean_html(html)
print(simplified_html)

# <html>
# <title>When was the bellagio in las vegas built?</title>
# <p>The Bellagio is a luxury hotel and casino located on the Las Vegas Strip in Paradise, Nevada. It was built in 1998.</p>
# <div>
# <p>Some other text</p>
# <p>Some other text</p>
# </div>
# </html>
```

### 🌲 Build Block Tree

```python
from htmlrag import build_block_tree

block_tree, simplified_html = build_block_tree(simplified_html, max_node_words=10)
for block in block_tree:
    print("Block Content: ", block[0])
    print("Block Path: ", block[1])
    print("Is Leaf: ", block[2])
    print("")

# Block Content: <title>When was the bellagio in las vegas built?</title>
# Block Path: ['html', 'title']
# Is Leaf: True
#
# Block Content: <div>
# <p>Some other text</p>
# <p>Some other text</p>
# </div>
# Block Path: ['html', 'div']
# Is Leaf: True
#
# Block Content: <p>The Bellagio is a luxury hotel and casino located on the Las Vegas Strip in Paradise, Nevada. It was built in 1998.</p>
# Block Path: ['html', 'p']
# Is Leaf: True
```
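The block-tree construction above can be sketched roughly as a recursive split (illustrative only — htmlrag's real implementation differs, and this toy treats the cleaned HTML as well-formed XML): a node whose whole subtree fits within `max_node_words` becomes one leaf block, and larger nodes are split into their children.

```python
# Toy block-tree builder, NOT the htmlrag implementation.
import xml.etree.ElementTree as ET

def word_count(el) -> int:
    """Count whitespace-separated words in an element's subtree."""
    return len("".join(el.itertext()).split())

def build_blocks(el, path, max_node_words):
    """Yield (content, path, is_leaf) tuples: small-enough subtrees
    (or childless nodes) become leaves; larger nodes are split."""
    if word_count(el) <= max_node_words or len(el) == 0:
        yield (ET.tostring(el, encoding="unicode").strip(), path + [el.tag], True)
    else:
        for child in el:
            yield from build_blocks(child, path + [el.tag], max_node_words)

html = """<html>
<title>When was the bellagio in las vegas built?</title>
<p>The Bellagio is a luxury hotel and casino located on the Las Vegas Strip in Paradise, Nevada. It was built in 1998.</p>
<div>
<p>Some other text</p>
<p>Some other text</p>
</div>
</html>"""

toy_blocks = list(build_blocks(ET.fromstring(html), [], max_node_words=10))
for content, block_path, is_leaf in toy_blocks:
    print(block_path, is_leaf)
```

The long `<p>` exceeds the word limit but has no children to split into, so it stays a leaf — mirroring why the library's output keeps it whole.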

### ✂️ Prune HTML Blocks with Embedding Model

```python
from htmlrag import EmbedHTMLPruner

embed_html_pruner = EmbedHTMLPruner(embed_model="bm25")
block_rankings = embed_html_pruner.calculate_block_rankings(question, simplified_html, block_tree)
print(block_rankings)

# [0, 2, 1]

from transformers import AutoTokenizer

chat_tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-70B-Instruct")

max_context_window = 60
pruned_html = embed_html_pruner.prune_HTML(simplified_html, block_tree, block_rankings, chat_tokenizer, max_context_window)
print(pruned_html)

# <html>
# <title>When was the bellagio in las vegas built?</title>
# <p>The Bellagio is a luxury hotel and casino located on the Las Vegas Strip in Paradise, Nevada. It was built in 1998.</p>
# </html>
```
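The `embed_model="bm25"` option ranks blocks lexically. A plain BM25 scorer conveys the idea (an illustrative sketch only, not htmlrag's internals — tokenization here is bare whitespace splitting):

```python
# Toy BM25 scorer for ranking text blocks against a query.
import math

def bm25_scores(query: str, docs: list[str], k1: float = 1.5, b: float = 0.75) -> list[float]:
    """Score each doc against the query with the standard BM25 formula."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / n
    scores = []
    for doc in tokenized:
        s = 0.0
        for term in set(query.lower().split()):
            df = sum(term in d for d in tokenized)  # document frequency
            if df == 0:
                continue
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1)
            tf = doc.count(term)  # term frequency in this doc
            s += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(s)
    return scores

docs = [
    "the bellagio is a luxury hotel built in 1998",
    "some other text",
]
print(bm25_scores("when was the bellagio built", docs))
```

Sorting block indices by these scores in descending order yields a ranking like the `[0, 2, 1]` printed above.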

### ✂️ Prune HTML Blocks with Generative Model

```python
from htmlrag import GenHTMLPruner
import torch

ckpt_path = "zstanjj/HTML-Pruner-Llama-1B"
if torch.cuda.is_available():
    device = "cuda"
else:
    device = "cpu"
gen_embed_pruner = GenHTMLPruner(gen_model=ckpt_path, max_node_words=5, device=device)
block_rankings = gen_embed_pruner.calculate_block_rankings(question, pruned_html)
print(block_rankings)

# [1, 0]

block_tree, pruned_html = build_block_tree(pruned_html, max_node_words=10)
for block in block_tree:
    print("Block Content: ", block[0])
    print("Block Path: ", block[1])
    print("Is Leaf: ", block[2])
    print("")

# Block Content: <title>When was the bellagio in las vegas built?</title>
# Block Path: ['html', 'title']
# Is Leaf: True
#
# Block Content: <p>The Bellagio is a luxury hotel and casino located on the Las Vegas Strip in Paradise, Nevada. It was built in 1998.</p>
# Block Path: ['html', 'p']
# Is Leaf: True

# chat_tokenizer is the tokenizer created in the embedding-pruner example above
max_context_window = 32
pruned_html = gen_embed_pruner.prune_HTML(pruned_html, block_tree, block_rankings, chat_tokenizer, max_context_window)
print(pruned_html)

# <p>The Bellagio is a luxury hotel and casino located on the Las Vegas Strip in Paradise, Nevada. It was built in 1998.</p>
```

## Results

- **Results for [HTML-Pruner-Phi-3.8B](https://huggingface.co/zstanjj/HTML-Pruner-Phi-3.8B) and [HTML-Pruner-Llama-1B](https://huggingface.co/zstanjj/HTML-Pruner-Llama-1B) with Llama-3.1-70B-Instruct as chat model**.

| Dataset          | ASQA      | HotpotQA  | NQ        | TriviaQA  | MuSiQue   | ELI5      |
|------------------|-----------|-----------|-----------|-----------|-----------|-----------|
| Metrics          | EM        | EM        | EM        | EM        | EM        | ROUGE-L   |
| BM25             | 49.50     | 38.25     | 47.00     | 88.00     | 9.50      | 16.15     |
| BGE              | 68.00     | 41.75     | 59.50     | 93.00     | 12.50     | 16.20     |
| E5-Mistral       | 63.00     | 36.75     | 59.50     | 90.75     | 11.00     | 16.17     |
| LongLLMLingua    | 62.50     | 45.00     | 56.75     | 92.50     | 10.25     | 15.84     |
| JinaAI Reader    | 55.25     | 34.25     | 48.25     | 90.00     | 9.25      | 16.06     |
| HtmlRAG-Phi-3.8B | **68.50** | **46.25** | 60.50     | **93.50** | **13.25** | **16.33** |
| HtmlRAG-Llama-1B | 66.50     | 45.00     | **60.75** | 93.00     | 10.00     | 16.25     |

---

## 📜 Citation

```bibtex
@misc{tan2024htmlraghtmlbetterplain,
      title={HtmlRAG: HTML is Better Than Plain Text for Modeling Retrieved Knowledge in RAG Systems},
      author={Jiejun Tan and Zhicheng Dou and Wen Wang and Mang Wang and Weipeng Chen and Ji-Rong Wen},
      year={2024},
      eprint={2411.02959},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2411.02959},
}
```
html-pruner-llama-1b.Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1e021b7ad5c1987ae4cf110128073a12179017e6ea3ff1578b0dd8e51bfcf29e
+size 770928544
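What the commit stores for the `.gguf` file is a Git LFS pointer (spec v1), not the ~771 MB quantized model itself: three key/value lines giving the spec version, the SHA-256 of the real blob, and its size in bytes. A minimal sketch of parsing such a pointer (`parse_lfs_pointer` is a hypothetical helper, not part of any library):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs spec v1 pointer file (one 'key value' pair per
    line) into a dict; 'size' is converted to an int of bytes."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    fields["size"] = int(fields["size"])
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:1e021b7ad5c1987ae4cf110128073a12179017e6ea3ff1578b0dd8e51bfcf29e
size 770928544
"""
info = parse_lfs_pointer(pointer)
print(info["oid"], info["size"])
```

Cloning the repository with `git lfs` installed replaces this pointer with the actual Q4_0 GGUF weights.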