Create REPORT4_Modifications_1+2+3 _PiT_Training_Results_in_Colab
REPORT4_Modifications_1+2+3 _PiT_Training_Results_in_Colab
In PiT V3.0, I modified the vanilla PiT model in a first undisclosed manner, plus a second disclosed manner, plus a third disclosed manner, essentially doubling the original Vanilla ViT parameter count. The training results greatly improved, and the three-way modified model was consistently ahead of the unmodified Vanilla PiT model.

My three-way modified V3.0 PiT model exceeded the Val Accuracy of both the vanilla and the Mod1+Mod2 PiT models by Epoch 8 (and exceeded the vanilla PiT model's final/highest Val Accuracy of 94.75% at Epoch ____).

By Epoch 15, the three-way modified PiT model had the highest Val Accuracy, 96.15% (surpassing the Vanilla PiT model).

By Epoch 15, the three-way modified PiT model had already exceeded 96% Val Accuracy, and Val Accuracy continued to climb to a peak of 96.70% (exceeding the Vanilla PiT and the Mod1+Mod2 PiT models) before training was terminated at the hardcoded limit of Epoch 25.

--- Configuration V3.0 ---
train_file: /content/sample_data/mnist_train_small.csv
test_file: /content/sample_data/mnist_test.csv
image_size: 28
num_classes: 10
embed_dim: XXX
num_layers: X
num_heads: X
mlp_dim: XXXX
dropout: 0.1
batch_size: 128
epochs: 25
learning_rate: 0.0001
XXXX
device: cuda
image_height: 28
image_width: 28
sequence_length: 784
------------------------------------------------------
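
The architecture hyperparameters above are deliberately redacted. Purely as an illustration of how such a configuration might be assembled (the placeholder values below are NOT the ones used for V3.0), a plain Python dict suffices; note that the reported sequence_length of 784 equals image_height * image_width, which suggests each 28 x 28 image is flattened into a 784-element sequence:

```python
# Illustrative sketch only: redacted fields get placeholder values here,
# NOT the (undisclosed) values actually used for the V3.0 model.
config = {
    "train_file": "/content/sample_data/mnist_train_small.csv",
    "test_file": "/content/sample_data/mnist_test.csv",
    "image_size": 28,
    "num_classes": 10,
    "embed_dim": 64,    # placeholder -- redacted in the report
    "num_layers": 4,    # placeholder -- redacted in the report
    "num_heads": 4,     # placeholder -- redacted in the report
    "mlp_dim": 256,     # placeholder -- redacted in the report
    "dropout": 0.1,
    "batch_size": 128,
    "epochs": 25,
    "learning_rate": 1e-4,
    "device": "cuda",
    "image_height": 28,
    "image_width": 28,
}
# sequence_length = image_height * image_width = 28 * 28 = 784,
# matching the value printed in the report.
config["sequence_length"] = config["image_height"] * config["image_width"]

print("--- Configuration V3.0 ---")
for key, value in config.items():
    print(f"{key}: {value}")
```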

Data loaded. Training on cuda.
Training samples: 17999
Validation samples: 2000
Test samples: 9999

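The sample counts above (17,999 train / 2,000 validation / 9,999 test) are consistent with reading Colab's bundled MNIST CSVs with pandas' default header handling (which would drop the first row if the files have no header) and holding out 2,000 rows for validation. The sketch below is one way to arrive at those shapes; it is an assumption, not the author's actual loader:

```python
import pandas as pd
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

def load_mnist_csv(path):
    # Each row: a label followed by 784 pixel values (one flattened 28 x 28 image).
    df = pd.read_csv(path)  # default header handling; see note above
    labels = torch.tensor(df.iloc[:, 0].to_numpy(), dtype=torch.long)
    pixels = torch.tensor(df.iloc[:, 1:].to_numpy(), dtype=torch.float32) / 255.0
    return TensorDataset(pixels, labels)  # pixels shape: (N, 784)

full_train = load_mnist_csv("/content/sample_data/mnist_train_small.csv")
test_set = load_mnist_csv("/content/sample_data/mnist_test.csv")

# Hold out 2,000 samples for validation, matching the counts in the report.
val_size = 2000
train_set, val_set = random_split(full_train, [len(full_train) - val_size, val_size])

train_loader = DataLoader(train_set, batch_size=128, shuffle=True)
val_loader = DataLoader(val_set, batch_size=128)
test_loader = DataLoader(test_set, batch_size=128)

print("Data loaded. Training on", "cuda" if torch.cuda.is_available() else "cpu")
print(f"Training samples: {len(train_set)}")
print(f"Validation samples: {len(val_set)}")
print(f"Test samples: {len(test_set)}")
```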
Model V3.0 initialized with 2,7XX,XXX trainable parameters.
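
One standard way to produce a trainable-parameter count like the one reported above (the architecture and the exact count remain redacted):

```python
import torch.nn as nn

def count_trainable_parameters(model: nn.Module) -> int:
    # Sum the element counts of all parameters that will receive gradient updates.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Hypothetical usage, with `pit_v3` standing in for the undisclosed V3.0 model:
# print(f"Model V3.0 initialized with {count_trainable_parameters(pit_v3):,} trainable parameters.")
```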

--- Starting Training (V3.0 Model) ---
Epoch 01/25 | Train Loss: 2.1971 | Val Loss: 1.9749 | Val Acc: 24.40%
-> New best validation accuracy! Saving model state.
Epoch 02/25 | Train Loss: 1.6013 | Val Loss: 0.9542 | Val Acc: 68.20%
-> New best validation accuracy! Saving model state.
Epoch 03/25 | Train Loss: 0.8384 | Val Loss: 0.6123 | Val Acc: 79.80%
-> New best validation accuracy! Saving model state.
Epoch 04/25 | Train Loss: 0.5816 | Val Loss: 0.4429 | Val Acc: 86.30%
-> New best validation accuracy! Saving model state.
Epoch 05/25 | Train Loss: 0.4547 | Val Loss: 0.3766 | Val Acc: 88.70%
-> New best validation accuracy! Saving model state.
Epoch 06/25 | Train Loss: 0.3752 | Val Loss: 0.2995 | Val Acc: 90.90%
-> New best validation accuracy! Saving model state.
Epoch 07/25 | Train Loss: 0.3099 | Val Loss: 0.2500 | Val Acc: 92.45%
-> New best validation accuracy! Saving model state.
Epoch 08/25 | Train Loss: 0.2781 | Val Loss: 0.2305 | Val Acc: 92.95%
-> New best validation accuracy! Saving model state.
Epoch 09/25 | Train Loss: 0.2576 | Val Loss: 0.2144 | Val Acc: 93.25%
-> New best validation accuracy! Saving model state.
Epoch 10/25 | Train Loss: 0.2275 | Val Loss: 0.1862 | Val Acc: 94.45%
-> New best validation accuracy! Saving model state.
Epoch 11/25 | Train Loss: 0.2092 | Val Loss: 0.1714 | Val Acc: 94.60%
-> New best validation accuracy! Saving model state.
Epoch 12/25 | Train Loss: 0.1924 | Val Loss: 0.1534 | Val Acc: 95.10%
-> New best validation accuracy! Saving model state.
Epoch 13/25 | Train Loss: 0.1847 | Val Loss: 0.1468 | Val Acc: 95.40%
-> New best validation accuracy! Saving model state.
Epoch 14/25 | Train Loss: 0.1649 | Val Loss: 0.1480 | Val Acc: 95.30%
Epoch 15/25 | Train Loss: 0.1587 | Val Loss: 0.1311 | Val Acc: 96.15%
-> New best validation accuracy! Saving model state.
Epoch 16/25 | Train Loss: 0.1503 | Val Loss: 0.1386 | Val Acc: 95.55%
Epoch 17/25 | Train Loss: 0.1433 | Val Loss: 0.1254 | Val Acc: 95.80%
Epoch 18/25 | Train Loss: 0.1324 | Val Loss: 0.1160 | Val Acc: 96.05%
Epoch 19/25 | Train Loss: 0.1285 | Val Loss: 0.1164 | Val Acc: 96.15%
Epoch 20/25 | Train Loss: 0.1184 | Val Loss: 0.1193 | Val Acc: 96.15%
Epoch 21/25 | Train Loss: 0.1145 | Val Loss: 0.1123 | Val Acc: 96.45%
-> New best validation accuracy! Saving model state.
Epoch 22/25 | Train Loss: 0.1125 | Val Loss: 0.1200 | Val Acc: 96.05%
Epoch 23/25 | Train Loss: 0.1053 | Val Loss: 0.1358 | Val Acc: 96.35%
Epoch 24/25 | Train Loss: 0.1021 | Val Loss: 0.1200 | Val Acc: 96.35%
Epoch 25/25 | Train Loss: 0.0964 | Val Loss: 0.1061 | Val Acc: 96.70%
-> New best validation accuracy! Saving model state.
--- Training Finished ---
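
The per-epoch log above follows a common "checkpoint on best validation accuracy" pattern. The sketch below reproduces that pattern in PyTorch; the optimizer choice (Adam here) and other details are assumptions, since the report does not disclose them:

```python
import copy
import torch
import torch.nn as nn

def train(model, train_loader, val_loader, epochs=25, lr=1e-4, device="cuda"):
    model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # assumed optimizer
    best_acc, best_state = 0.0, None

    for epoch in range(1, epochs + 1):
        # --- training pass ---
        model.train()
        train_loss = 0.0
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
            train_loss += loss.item() * x.size(0)
        train_loss /= len(train_loader.dataset)

        # --- validation pass ---
        model.eval()
        val_loss, correct = 0.0, 0
        with torch.no_grad():
            for x, y in val_loader:
                x, y = x.to(device), y.to(device)
                logits = model(x)
                val_loss += criterion(logits, y).item() * x.size(0)
                correct += (logits.argmax(dim=1) == y).sum().item()
        val_loss /= len(val_loader.dataset)
        val_acc = 100.0 * correct / len(val_loader.dataset)

        print(f"Epoch {epoch:02d}/{epochs} | Train Loss: {train_loss:.4f} "
              f"| Val Loss: {val_loss:.4f} | Val Acc: {val_acc:.2f}%")
        if val_acc > best_acc:
            best_acc, best_state = val_acc, copy.deepcopy(model.state_dict())
            print("-> New best validation accuracy! Saving model state.")
    return best_state
```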

--- Evaluating on Test Set (V3.0 Model) ---
Final Test Loss: 0.1031
Final Test Accuracy: 96.99%
-------------------------------------------
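
For completeness, a matching sketch of the final test-set evaluation. It reuses names from the training sketch above (`model`, `best_state`, `test_loader`); this is an assumption about the structure, not the author's exact code:

```python
import torch
import torch.nn as nn

def evaluate(model, loader, criterion, device="cuda"):
    # Average loss and accuracy over an entire DataLoader, without gradients.
    model.eval()
    total_loss, correct = 0.0, 0
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            logits = model(x)
            total_loss += criterion(logits, y).item() * x.size(0)
            correct += (logits.argmax(dim=1) == y).sum().item()
    n = len(loader.dataset)
    return total_loss / n, 100.0 * correct / n

# Hypothetical usage: restore the best checkpoint, then evaluate once on the test set.
# model.load_state_dict(best_state)
# test_loss, test_acc = evaluate(model, test_loader, nn.CrossEntropyLoss())
# print(f"Final Test Loss: {test_loss:.4f}")
# print(f"Final Test Accuracy: {test_acc:.2f}%")
```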