---
language: ar
---

# Sanaa

## Arabic GPT-2 demo

This is a small GPT-2 model retrained on Arabic Wikipedia circa September 2020. Due to memory limits, only the first 600,000 lines of the Wiki dump were used for training.

Training notebook: https://colab.research.google.com/drive/1Z_935vTuZvbseOsExCjSprrqn1MsQT57

Steps to train:

- Follow the beginning of Pierre Guillou's Portuguese GPT-2 notebook (https://github.com/piegu/fastai-projects/blob/master/finetuning-English-GPT2-any-language-Portuguese-HuggingFace-fastaiv2.ipynb) to download Arabic Wikipedia and run WikiExtractor (see the first sketch after the code block below)
- Read Ng Wai Foong's Beginner's Guide: https://medium.com/@ngwaifoong92/beginners-guide-to-retrain-gpt-2-117m-to-generate-custom-text-content-8bb5363d8b7f
- Following Ng Wai Foong's instructions, create an encoded .npz corpus (the corpus in my project was very small, and results would improve with many times more training data; an encoding sketch follows below)
- Run generate_unconditional_samples.py and other sample code to generate text
- Download the TensorFlow checkpoints
- Use my notebook code to write vocab.json and an empty merges.txt (sketched below)
- Copy config.json from a similar GPT-2 architecture and edit it as needed (sketched below)
- Convert the TensorFlow checkpoint to a Hugging Face PyTorch model:

```python
from transformers import AutoModel

# Load the TensorFlow checkpoint from ./argpt and save it in PyTorch format
am = AutoModel.from_pretrained('./argpt', from_tf=True)
am.save_pretrained("./")
```
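For the download-and-extract step, the linked notebook runs the equivalent commands in shell; the sketch below drives them from Python. The dump URL, output directory, and the `wikiextractor` module invocation are assumptions that may differ by WikiExtractor version.

```python
# Sketch of the download-and-extract step (assumed paths and dump URL;
# the linked notebook performs the equivalent commands in shell).
import subprocess

# Download the Arabic Wikipedia dump
subprocess.run(
    ["wget", "https://dumps.wikimedia.org/arwiki/latest/arwiki-latest-pages-articles.xml.bz2"],
    check=True,
)

# Extract plain text from the dump with WikiExtractor
subprocess.run(
    ["python", "-m", "wikiextractor.WikiExtractor",
     "arwiki-latest-pages-articles.xml.bz2", "-o", "extracted"],
    check=True,
)
```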
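For the .npz encoding step, Ng Wai Foong's guide uses the encode.py script from its GPT-2 repo; the sketch below is only an equivalent illustration using the transformers tokenizer, and the file names are assumptions.

```python
# Illustrative stand-in for the guide's encode.py step: BPE-encode the
# extracted text and store the token ids in a compressed .npz file.
import numpy as np
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")  # in practice, a tokenizer suited to Arabic

chunks = []
with open("ar_wiki.txt", encoding="utf-8") as f:  # assumed: concatenated WikiExtractor output
    for line in f:
        ids = tokenizer.encode(line.strip())
        if ids:
            chunks.append(np.array(ids, dtype=np.int32))

np.savez_compressed("ar_wiki.npz", *chunks)
print(f"Encoded {len(chunks)} text chunks")
```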
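The notebook code that writes the tokenizer files isn't reproduced here; this is a minimal sketch, assuming a token-to-id mapping called `encoder` produced during training.

```python
# Minimal sketch of writing the tokenizer files; "encoder" is a
# hypothetical placeholder for the token-to-id dict from training.
import json

encoder = {"<|endoftext|>": 0}  # stand-in: the real dict covers the full vocabulary

with open("vocab.json", "w", encoding="utf-8") as f:
    json.dump(encoder, f, ensure_ascii=False)

# GPT-2 tokenizers also expect merges.txt; per the steps above it is
# written empty apart from the customary version header.
with open("merges.txt", "w", encoding="utf-8") as f:
    f.write("#version: 0.2\n")
```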
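For the config.json step, one way to copy and edit the config is through transformers' GPT2Config; the edited value below is illustrative and should match the trained checkpoint.

```python
# Sketch: start from the small GPT-2 config, adjust it to the trained
# checkpoint, and save it alongside the TF weights.
from transformers import GPT2Config

config = GPT2Config.from_pretrained("gpt2")
config.vocab_size = 50257  # assumption: set to the size of your vocab.json
config.save_pretrained("./argpt")  # writes ./argpt/config.json
```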
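Once converted, the model can be sanity-checked with a quick generation. The prompt and sampling settings below are arbitrary, and the tokenizer files written earlier are assumed to sit in the same directory as the converted weights.

```python
# Quick generation check on the converted model (assumed: vocab.json,
# merges.txt, and config.json are in the current directory).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("./")
model = GPT2LMHeadModel.from_pretrained("./")

input_ids = tokenizer.encode("تاريخ", return_tensors="pt")  # Arabic prompt: "history"
output = model.generate(input_ids, max_length=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```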