---
license: apache-2.0
tags:
- gpt-j
- llm
datasets:
- EleutherAI/pile
---
# MaryGPT Model Card
MaryGPT is a text generation model: a fine-tuned version of GPT-J 6B.
The model is fine-tuned exclusively on text from Mary Shelley's 1818 novel *Frankenstein; or, The Modern Prometheus*.
It will be used as a base model for the activity of AI artist Yuma Kishi👤.
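Below is a minimal usage sketch with the Hugging Face `transformers` library. The repository ID is a placeholder, not a confirmed hub location, and the generation parameters are illustrative only.

```python
# Minimal generation sketch with Hugging Face transformers.
# NOTE: the repository ID below is a placeholder; substitute the
# actual hub ID where MaryGPT is published.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/MaryGPT"  # placeholder, not a confirmed hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "I beheld the wretch, the miserable monster whom I had created."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```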
## Training Data Sources
All data was obtained ethically and in compliance with the source sites' terms and conditions. No copyrighted images were used in training this model without permission, and no AI-generated images are in the dataset.
- GPT-J 6B was trained on the Pile, a large-scale curated dataset created by EleutherAI.
- *Frankenstein; or, The Modern Prometheus*, 1818 (public domain); a retrieval sketch follows this list.
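As a sketch of how the public-domain fine-tuning text could be prepared: the Project Gutenberg ebook ID below is an assumption, not necessarily the source the authors used; any faithful plain-text copy of the 1818 edition would serve.

```python
# Sketch: fetch a public-domain Frankenstein text for fine-tuning.
# The Gutenberg ebook ID (41445, the 1818 edition) is an assumption.
import urllib.request

URL = "https://www.gutenberg.org/cache/epub/41445/pg41445.txt"
raw = urllib.request.urlopen(URL).read().decode("utf-8")

# Strip the Project Gutenberg header and footer before tokenizing.
start = raw.find("*** START OF")
end = raw.find("*** END OF")
body = raw[raw.find("\n", start) + 1 : end].strip()

with open("frankenstein_1818.txt", "w", encoding="utf-8") as f:
    f.write(body)
```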
## Training procedure
The base model, GPT-J 6B, was trained for 402 billion tokens over 383,500 steps on a TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly; MaryGPT was fine-tuned from it with the same objective.
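A minimal sketch of one step of this autoregressive objective, assuming PyTorch and the `transformers` GPT-J implementation; when `labels` are passed, the model internally shifts them and computes the cross-entropy between each position's logits and the next token. The hyperparameters are illustrative, not those used here.

```python
# Sketch of one fine-tuning step with the autoregressive
# cross-entropy objective. Illustrative only; a real run needs
# far more memory/hardware than this snippet implies.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

batch = tokenizer("It was on a dreary night of November...", return_tensors="pt")
# With labels == input_ids, the model shifts internally and returns
# the mean next-token cross-entropy as outputs.loss.
outputs = model(**batch, labels=batch["input_ids"])
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```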
## Developed by

### MaryGPT

### GPT-J
- James Bradbury for valuable assistance with debugging JAX issues.
- Stella Biderman, Eric Hallahan, Kurumuz, and Finetune for converting the model to be compatible with the `transformers` package.
- Leo Gao for running zero shot evaluations for the baseline models for the table.
- Laurence Golding for adding some features to the web demo.
- Aran Komatsuzaki for advice with experiment design and writing the blog posts.
- Janko Prester for creating the web demo frontend.