# DACTYL Classifiers
A collection of 10 trained AI-generated text classifiers. *Pretrained* classifiers are trained with binary cross-entropy (BCE) loss; *finetuned* classifiers are optimized with deep X-risk (DXO) objectives.
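As a concrete reference for the BCE setup, here is a minimal PyTorch sketch of one training step using the backbone and hyperparameters from the configuration below; the loop itself (batching, device placement) is illustrative and not taken from the released training code.

```python
import torch
from torch.nn import BCEWithLogitsLoss
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Backbone and hyperparameters mirror the configuration below; the loop
# itself is an illustrative sketch, not the released training code.
model = AutoModelForSequenceClassification.from_pretrained(
    "huawei-noah/TinyBERT_General_4L_312D", num_labels=1
)
tokenizer = AutoTokenizer.from_pretrained("huawei-noah/TinyBERT_General_4L_312D")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, weight_decay=0.01)
loss_fn = BCEWithLogitsLoss()  # applies sigmoid internally, hence apply_sigmoid: false

def training_step(texts: list[str], labels: torch.Tensor) -> float:
    """One BCE step; labels are 1.0 for AI-generated text, 0.0 for human."""
    model.train()
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    logits = model(**batch).logits.squeeze(-1)  # raw logits, shape (batch_size,)
    loss = loss_fn(logits, labels.float())
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```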
Training configuration for the pretrained TinyBERT classifier (`ShantanuT01/dactyl-tinybert-pretrained`):

```json
{
    "training_split": "training",
    "evaluation_split": "testing",
    "results_path": "bce-tinybert-base.csv",
    "num_epochs": 5,
    "model_path": "huawei-noah/TinyBERT_General_4L_312D",
    "tokenizer": "huawei-noah/TinyBERT_General_4L_312D",
    "optimizer": "AdamW",
    "optimizer_type": "torch",
    "optimizer_args": {
        "lr": 2e-05,
        "weight_decay": 0.01
    },
    "loss_fn": "BCEWithLogitsLoss",
    "reset_classification_head": false,
    "loss_type": "torch",
    "loss_fn_args": {},
    "needs_loss_fn_as_parameter": false,
    "save_path": "ShantanuT01/dactyl-tinybert-pretrained",
    "training_args": {
        "batch_size": 64,
        "needs_sampler": false,
        "needs_index": false,
        "shuffle": true,
        "sampling_rate": null,
        "apply_sigmoid": false
    },
    "best_model_path": "best-tinybert-model"
}
```
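Because `apply_sigmoid` is false and the loss is `BCEWithLogitsLoss`, the checkpoint emits raw logits, so inference needs an explicit sigmoid to turn the single logit into a probability. A minimal usage sketch, assuming the `save_path` repository loads as a standard `transformers` sequence-classification checkpoint:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "ShantanuT01/dactyl-tinybert-pretrained"  # save_path from the config above
model = AutoModelForSequenceClassification.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)
model.eval()

def ai_probability(text: str) -> float:
    """Probability that `text` is AI-generated: sigmoid over the single logit."""
    batch = tokenizer(text, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logit = model(**batch).logits.squeeze()
    return torch.sigmoid(logit).item()

print(ai_probability("As an AI language model, I cannot browse the internet."))
```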
Evaluation results for the pretrained TinyBERT classifier on the testing split (`bce-tinybert-base.csv`), broken down by the generator that produced the AI-written text:

| Generator | AP | AUC | OPAUC | TPAUC |
|---|---|---|---|---|
| DeepSeek-V3 | 0.989355 | 0.996855 | 0.993696 | 0.938917 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-RedditWritingPrompts-testing | 0.0371425 | 0.749778 | 0.641038 | 0 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-abstracts-testing | 0.157916 | 0.898819 | 0.654391 | 0 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-news-testing | 0.00768129 | 0.591508 | 0.517311 | 0 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-reviews-testing | 0.0827563 | 0.921267 | 0.7135 | 0 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-student_essays-testing | 0.00826 | 0.580281 | 0.506719 | 0 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-tweets-testing | 0.456347 | 0.976401 | 0.86447 | 0.105054 |
| claude-3-5-haiku-20241022 | 0.973562 | 0.995183 | 0.982616 | 0.830874 |
| claude-3-5-sonnet-20241022 | 0.985813 | 0.99776 | 0.990951 | 0.912414 |
| gemini-1.5-flash | 0.959554 | 0.993995 | 0.973325 | 0.741471 |
| gemini-1.5-pro | 0.906423 | 0.981521 | 0.940218 | 0.434661 |
| gpt-4o-2024-11-20 | 0.971624 | 0.995329 | 0.981673 | 0.82246 |
| gpt-4o-mini | 0.99553 | 0.999309 | 0.997378 | 0.974682 |
| llama-3.2-90b | 0.929239 | 0.98677 | 0.953935 | 0.556386 |
| llama-3.3-70b | 0.96513 | 0.993237 | 0.976893 | 0.775907 |
| mistral-large-latest | 0.988606 | 0.997797 | 0.993021 | 0.932415 |
| mistral-small-latest | 0.987975 | 0.996807 | 0.992217 | 0.924494 |
| overall | 0.97755 | 0.980085 | 0.953644 | 0.550035 |
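AP and AUC above can be reproduced with scikit-learn. OPAUC and TPAUC denote one-way and two-way partial AUC from the deep X-risk literature; the sketch below uses `roc_auc_score`'s `max_fpr` for the one-way case with an assumed cutoff of 0.1 (the actual evaluation cutoffs are not stated here). TPAUC, which additionally bounds the TPR range, needs custom code or a library such as LibAUC and is omitted.

```python
from sklearn.metrics import average_precision_score, roc_auc_score

def detection_metrics(y_true, y_score, max_fpr=0.1):
    """AP, AUC, and one-way partial AUC for a detector.

    y_true: 1 = AI-generated, 0 = human; y_score: detector probabilities.
    max_fpr=0.1 is an assumed cutoff, and sklearn reports the
    McClish-standardized partial AUC rather than the raw area.
    """
    return {
        "AP": average_precision_score(y_true, y_score),
        "AUC": roc_auc_score(y_true, y_score),
        "OPAUC": roc_auc_score(y_true, y_score, max_fpr=max_fpr),
    }
```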