---
license: mit
---
|
|
|
If you use this dataset in academic work, please cite the following paper: https://ieeexplore.ieee.org/document/10223689
|
|
|
This is the dataset used to further pre-train the [BERTweet](https://huggingface.co/cardiffnlp/twitter-roberta-base) language model with a Masked Language Modeling (MLM) objective, producing the [CryptoBERT](https://huggingface.co/ElKulako/cryptobert) language model.
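As a rough illustration of this kind of domain-adaptive MLM training (not the authors' exact training script), the sketch below uses the Hugging Face `transformers` Trainer; the base checkpoint, file path, and hyperparameters are assumptions for illustration only.

```python
# Hypothetical sketch of domain-adaptive MLM training with Hugging Face Transformers.
# The checkpoint name, corpus path, and hyperparameters are illustrative assumptions,
# not the authors' exact configuration.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_checkpoint = "vinai/bertweet-base"  # assumed BERTweet base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)
model = AutoModelForMaskedLM.from_pretrained(base_checkpoint)

# Assume the corpus is stored as one post per line in a plain-text file.
corpus = load_dataset("text", data_files={"train": "crypto_posts.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

# Randomly mask tokens for the MLM objective (15% is the conventional default).
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="cryptobert-mlm",
    per_device_train_batch_size=64,
    num_train_epochs=3,
    learning_rate=2e-5,
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
).train()
```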
|
|
|
The dataset contains 3.207 million unique posts of cryptocurrency-related social media text.
|
|
|
It comprises 1.865 million StockTwits posts, 496 thousand tweets, 172 thousand Reddit comments, and 664 thousand Telegram messages.
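A minimal loading and inspection sketch is shown below; note that the repository id and the column names (`text`, `source`) are assumptions about this dataset's layout, not its confirmed schema.

```python
# Hypothetical snippet for loading the corpus and counting posts per platform.
# The repo id and the "source" column are assumptions for illustration only.
from collections import Counter
from datasets import load_dataset

ds = load_dataset("ElKulako/cryptobert-posttrain", split="train")  # hypothetical repo id

print(ds)                     # row count and column names
print(Counter(ds["source"]))  # posts per platform (StockTwits, Twitter, Reddit, Telegram)
```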