Update README.md
README.md
@@ -69,4 +69,19 @@ torchrun --nproc_per_node=8 --master_port=36646 train_alignment.py \
Although this project aims to better align current LMs with social norms, inappropriate content and inherent biases in the training data will still impair the alignment of the model.

-The model should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application.
+The model should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application.
+
+# Citation
+
+Please cite our paper if you use the data or code in this repo:
+
+```bibtex
+@misc{liu2023sociallyaligned,
+  title={Training Socially Aligned Language Models in Simulated Human Society},
+  author={Ruibo Liu and Ruixin Yang and Chenyan Jia and Ge Zhang and Denny Zhou and Andrew M. Dai and Diyi Yang and Soroush Vosoughi},
+  year={2023},
+  eprint={2305.16960},
+  archivePrefix={arXiv},
+  primaryClass={cs.CL}
+}
+```