Update README.md
README.md CHANGED
@@ -15,7 +15,7 @@ This model is an adaptively fine-tuned version of Llama-2-7B-Instruct optimized
 
 ### Model Description
 
-This model is a fine-tuned version of Llama-2-7B-Instruct that has been optimized using Direct Preference Optimization (DPO) to evade the EXP watermarking method described in Aaronson and Kirchner (2023). The model preserves text quality while modifying the statistical patterns that watermarking methods rely on for detection.
+This model is a fine-tuned version of Llama-2-7B-Instruct that has been optimized using Direct Preference Optimization (DPO) to evade the [EXP watermarking method](https://www.scottaaronson.com/talks/watermark.ppt) described in Aaronson and Kirchner (2023). The model preserves text quality while modifying the statistical patterns that watermarking methods rely on for detection.
 
 - **Model type:** Decoder-only transformer language model
 - **Language(s):** English
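As background for the change above: the EXP scheme of Aaronson and Kirchner selects each token by the exponential-minimum (Gumbel-style) rule, i.e. it picks the token `i` maximizing `r_i ** (1 / p_i)`, where the `r_i` are pseudorandom uniforms derived from a secret key and the preceding tokens. The sketch below illustrates that rule only; the SHA-256 seeding, the `key` argument, and the use of the full context as the seed window are illustrative assumptions, not the reference implementation.

```python
import hashlib
import numpy as np

def exp_watermark_sample(probs, prev_tokens, key=b"secret"):
    """Pick the next token with the EXP watermarking rule (illustrative sketch)."""
    # Assumption: seed a PRNG from the secret key plus the preceding tokens.
    seed_material = key + b"|" + ",".join(map(str, prev_tokens)).encode()
    seed = int.from_bytes(hashlib.sha256(seed_material).digest()[:8], "big")
    rng = np.random.default_rng(seed)

    probs = np.asarray(probs, dtype=float)
    # One pseudorandom uniform r_i in (0, 1) per vocabulary entry.
    r = rng.random(len(probs))

    # EXP rule: argmax_i r_i ** (1 / p_i), computed in log space as
    # log(r_i) / p_i; zero-probability tokens can never be chosen.
    scores = np.full(len(probs), -np.inf)
    mask = probs > 0
    scores[mask] = np.log(r[mask]) / probs[mask]
    return int(np.argmax(scores))
```

Because the `r_i` are reproducible from the key and the context, a detector can recompute them and test whether the generated tokens score suspiciously high under this rule; the fine-tuning described in the diff shifts the model's output distribution so that this statistical signal degrades.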