Upload folder using huggingface_hub
- attnserver.run_attnserver.slurm.sh.343188.out.log +642 -0
- attnserver.run_attnserver.slurm.sh.343194.err.log +0 -0
- attnserver.run_attnserver.slurm.sh.343194.out.log +0 -0
- attnserver.run_attnserver.slurm.sh.343195.out.log +323 -0
- attnserver.run_attnserver.slurm.sh.343196.err.log +281 -0
- attnserver.run_attnserver.slurm.sh.343196.out.log +0 -0
- attnserver.run_attnserver.slurm.sh.343197.err.log +263 -0
- attnserver.run_attnserver.slurm.sh.343197.out.log +0 -0
- attnserver.run_attnserver.slurm.sh.343201.out.log +186 -0
attnserver.run_attnserver.slurm.sh.343188.out.log
CHANGED
@@ -119674,3 +119674,645 @@ Theoretical memory footprints: weight and optimizer=1206.09 MB
[Rank 59] (after 1 iterations) memory (MB) | allocated: 25522.66455078125 | max allocated: 25670.65087890625 | reserved: 29386.0 | max reserved: 29386.0
[Rank 58] (after 1 iterations) memory (MB) | allocated: 25522.66455078125 | max allocated: 25670.65087890625 | reserved: 29386.0 | max reserved: 29386.0

[Rank 37] (after 1 iterations) memory (MB) | allocated: 25522.66455078125 | max allocated: 25670.65087890625 | reserved: 29382.0 | max reserved: 29382.0
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 16384])
batch tensor after cp: labels torch.Size([1, 16384])
batch tensor after cp: loss_mask torch.Size([1, 16384])
batch tensor after cp: attention_mask torch.Size([1, 1, 16384, 131072])
batch tensor after cp: position_ids torch.Size([1, 16384])
Start exporting trace 1
Done exporting trace 1
attnserver.run_attnserver.slurm.sh.343194.err.log
CHANGED
The diff for this file is too large to render.
See raw diff
attnserver.run_attnserver.slurm.sh.343194.out.log
CHANGED
The diff for this file is too large to render.
See raw diff
attnserver.run_attnserver.slurm.sh.343195.out.log
CHANGED
@@ -63261,3 +63261,326 @@ batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
| 63264 |
+
batch tensor: tokens torch.Size([1, 98304])
|
| 63265 |
+
batch tensor: labels torch.Size([1, 98304])
|
| 63266 |
+
batch tensor: loss_mask torch.Size([1, 98304])
|
| 63267 |
+
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
|
| 63268 |
+
batch tensor: position_ids torch.Size([1, 98304])
|
| 63269 |
+
batch tensor after cp: tokens torch.Size([1, 24576])
|
| 63270 |
+
batch tensor after cp: labels torch.Size([1, 24576])
|
| 63271 |
+
batch tensor after cp: loss_mask torch.Size([1, 24576])
|
| 63272 |
+
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
|
| 63273 |
+
batch tensor after cp: position_ids torch.Size([1, 24576])
|
| 63274 |
+
batch tensor: tokens torch.Size([1, 98304])
|
| 63275 |
+
batch tensor: labels torch.Size([1, 98304])
|
| 63276 |
+
batch tensor: loss_mask torch.Size([1, 98304])
|
| 63277 |
+
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
|
| 63278 |
+
batch tensor: position_ids torch.Size([1, 98304])
|
| 63279 |
+
batch tensor after cp: tokens torch.Size([1, 24576])
|
| 63280 |
+
batch tensor after cp: labels torch.Size([1, 24576])
|
| 63281 |
+
batch tensor after cp: loss_mask torch.Size([1, 24576])
|
| 63282 |
+
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
|
| 63283 |
+
batch tensor after cp: position_ids torch.Size([1, 24576])
|
| 63284 |
+
Start exporting trace 5
|
| 63285 |
+
Done exporting trace 5
|
| 63286 |
+
[2025-06-21 20:54:04] iteration 6/ 10 | consumed samples: 6 | elapsed time per iteration (ms): 82380.2 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 134217728.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
|
| 63287 |
+
batch tensor: tokens torch.Size([1, 98304])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
batch tensor: tokens torch.Size([1, 98304])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
batch tensor: tokens torch.Size([1, 98304])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
batch tensor: tokens torch.Size([1, 98304])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
batch tensor: tokens torch.Size([1, 98304])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
batch tensor: tokens torch.Size([1, 98304])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
batch tensor: tokens torch.Size([1, 98304])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
batch tensor: tokens torch.Size([1, 98304])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
batch tensor: tokens torch.Size([1, 98304])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
batch tensor: tokens torch.Size([1, 98304])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
batch tensor: tokens torch.Size([1, 98304])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
batch tensor: tokens torch.Size([1, 98304])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
batch tensor: tokens torch.Size([1, 98304])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
batch tensor: tokens torch.Size([1, 98304])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
batch tensor: tokens torch.Size([1, 98304])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor: tokens torch.Size([1, 98304])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
batch tensor: tokens torch.Size([1, 98304])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
batch tensor: tokens torch.Size([1, 98304])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
batch tensor: tokens torch.Size([1, 98304])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
batch tensor: tokens torch.Size([1, 98304])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
batch tensor: tokens torch.Size([1, 98304])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
batch tensor: tokens torch.Size([1, 98304])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
batch tensor: tokens torch.Size([1, 98304])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
batch tensor: tokens torch.Size([1, 98304])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
batch tensor: tokens torch.Size([1, 98304])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
batch tensor: tokens torch.Size([1, 98304])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
batch tensor: tokens torch.Size([1, 98304])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
batch tensor: tokens torch.Size([1, 98304])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
batch tensor: tokens torch.Size([1, 98304])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
batch tensor: tokens torch.Size([1, 98304])
batch tensor: labels torch.Size([1, 98304])
batch tensor: loss_mask torch.Size([1, 98304])
batch tensor: attention_mask torch.Size([1, 1, 98304, 98304])
batch tensor: position_ids torch.Size([1, 98304])
batch tensor after cp: tokens torch.Size([1, 24576])
batch tensor after cp: labels torch.Size([1, 24576])
batch tensor after cp: loss_mask torch.Size([1, 24576])
batch tensor after cp: attention_mask torch.Size([1, 1, 24576, 98304])
batch tensor after cp: position_ids torch.Size([1, 24576])
attnserver.run_attnserver.slurm.sh.343196.err.log
CHANGED
@@ -184,3 +184,284 @@ W0621 20:53:01.122000 1480440 site-packages/torch/distributed/run.py:766]
W0621 20:53:01.122000 1480440 site-packages/torch/distributed/run.py:766] *****************************************
W0621 20:53:01.122000 1480440 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0621 20:53:01.122000 1480440 site-packages/torch/distributed/run.py:766] *****************************************
[rank16]:[W621 20:53:32.279096536 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 16] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank24]:[W621 20:53:32.363909187 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 24] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank0]:[W621 20:53:32.051519772 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank5]:[W621 20:53:32.436188333 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank29]:[W621 20:53:32.819664263 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 29] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank21]:[W621 20:53:32.743476734 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 21] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank13]:[W621 20:53:32.654256940 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 13] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank2]:[W621 20:53:32.449061221 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank22]:[W621 20:53:32.756792811 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 22] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank18]:[W621 20:53:32.756862251 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 18] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank6]:[W621 20:53:32.451596822 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank26]:[W621 20:53:32.837211917 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 26] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank14]:[W621 20:53:32.669149161 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 14] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank10]:[W621 20:53:32.669192753 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 10] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank30]:[W621 20:53:32.838954953 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 30] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank15]:[W621 20:53:32.675110418 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 15] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank7]:[W621 20:53:32.463604525 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank31]:[W621 20:53:32.846247510 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 31] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank23]:[W621 20:53:32.770166045 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 23] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank11]:[W621 20:53:32.682046853 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 11] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank9]:[W621 20:53:32.682127974 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 9] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank27]:[W621 20:53:32.851364172 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 27] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank25]:[W621 20:53:32.851527033 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 25] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank1]:[W621 20:53:32.468677631 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank3]:[W621 20:53:32.468693698 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank17]:[W621 20:53:32.775797003 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 17] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank19]:[W621 20:53:32.775852770 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 19] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank8]:[W621 20:53:32.701045847 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 8] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank12]:[W621 20:53:32.708214824 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 12] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank28]:[W621 20:53:32.877280260 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 28] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank4]:[W621 20:53:32.499692504 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
[rank20]:[W621 20:53:32.807377963 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 20] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
warnings.warn(
[rank7]:[W621 20:54:05.852469863 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank6]:[W621 20:54:05.957029135 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank5]:[W621 20:54:05.012904183 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank2]:[W621 20:54:05.023179044 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank3]:[W621 20:54:05.095949824 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank1]:[W621 20:54:05.156542055 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank4]:[W621 20:54:05.261689988 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank0]:[W621 20:54:05.381962495 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank28]:[W621 20:54:05.931318464 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank14]:[W621 20:54:05.779361497 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank31]:[W621 20:54:05.985568953 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank23]:[W621 20:54:05.928259412 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank10]:[W621 20:54:05.855658903 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank30]:[W621 20:54:05.059286001 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank25]:[W621 20:54:05.070427109 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank27]:[W621 20:54:05.071332041 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank15]:[W621 20:54:05.927669030 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank22]:[W621 20:54:05.022212298 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank11]:[W621 20:54:05.932691566 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank9]:[W621 20:54:05.941872807 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank13]:[W621 20:54:05.942988962 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank20]:[W621 20:54:05.035175231 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank29]:[W621 20:54:05.122233118 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank24]:[W621 20:54:05.132815528 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 371 |
+
[rank12]:[W621 20:54:05.964062011 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 372 |
+
[rank17]:[W621 20:54:05.073793095 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 373 |
+
[rank26]:[W621 20:54:05.175702868 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 374 |
+
[rank16]:[W621 20:54:06.105576035 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 375 |
+
[rank18]:[W621 20:54:06.128356475 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 376 |
+
[rank19]:[W621 20:54:06.134867285 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 377 |
+
[rank8]:[W621 20:54:06.094994586 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 378 |
+
[rank21]:[W621 20:54:06.316765263 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
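The destroy_process_group() warnings above are emitted by every rank because the workers exit without tearing down their NCCL process group. A minimal sketch of the explicit teardown the warning asks for, assuming the training entry point (e.g. the main() of pretrain_gpt_profile.py) owns the torch.distributed lifecycle:

    import torch.distributed as dist

    def main():
        # ... distributed training as launched above ...
        pass

    if __name__ == "__main__":
        try:
            main()
        finally:
            # Explicit teardown avoids the "destroy_process_group() was not
            # called before program exit" resource-leak warning.
            if dist.is_available() and dist.is_initialized():
                dist.destroy_process_group()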
| 379 |
+
+ set +x
|
| 380 |
+
+ set +x
|
| 381 |
+
+ set +x
|
| 382 |
+
+ set +x
|
| 383 |
+
+ for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
|
| 384 |
+
+ export PROF_CTX_LENGTH=2048
|
| 385 |
+
+ PROF_CTX_LENGTH=2048
|
| 386 |
+
+ name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L2048*tp8.cp4.bs2.json'
|
| 387 |
+
+ '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L2048*tp8.cp4.bs2.json' ']'
|
| 388 |
+
+ echo 'Running ctx_length=2048, TP_SIZE=8, CP_SIZE=4, BATCH_SIZE=2'
|
| 389 |
+
+ srun bash ./attnserver.sh
|
| 390 |
+
+ which python3
|
| 391 |
+
+ which python3
|
| 392 |
+
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 4 --node_rank 1 --rdzv_id 343196 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-184:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 2048 --max-position-embeddings 2048 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
|
| 393 |
+
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 4 --node_rank 3 --rdzv_id 343196 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-184:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 2048 --max-position-embeddings 2048 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
|
| 394 |
+
+ which python3
|
| 395 |
+
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 4 --node_rank 2 --rdzv_id 343196 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-184:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 2048 --max-position-embeddings 2048 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
|
| 396 |
+
+ which python3
|
| 397 |
+
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 4 --node_rank 0 --rdzv_id 343196 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-184:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 2048 --max-position-embeddings 2048 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
|
| 398 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
|
| 399 |
+
and will be removed in future. Use torchrun.
|
| 400 |
+
Note that --use-env is set by default in torchrun.
|
| 401 |
+
If your script expects `--local-rank` argument to be set, please
|
| 402 |
+
change it to read from `os.environ['LOCAL_RANK']` instead. See
|
| 403 |
+
https://pytorch.org/docs/stable/distributed.html#launch-utility for
|
| 404 |
+
further instructions
|
| 405 |
+
|
| 406 |
+
main()
|
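The FutureWarning above comes from the deprecated torch.distributed.launch wrapper used to start each node. The same launch works under torchrun as long as the script takes its local rank from the environment instead of a --local-rank argument; a minimal sketch of that lookup (illustrative only, not necessarily how pretrain_gpt_profile.py is written):

    import os

    # torchrun (and torch.distributed.launch with --use-env, the default here)
    # exports LOCAL_RANK for every worker instead of passing --local-rank.
    local_rank = int(os.environ.get("LOCAL_RANK", "0"))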
| 407 |
+
W0621 20:54:12.173000 1956323 site-packages/torch/distributed/run.py:766]
|
| 408 |
+
W0621 20:54:12.173000 1956323 site-packages/torch/distributed/run.py:766] *****************************************
|
| 409 |
+
W0621 20:54:12.173000 1956323 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
|
| 410 |
+
W0621 20:54:12.173000 1956323 site-packages/torch/distributed/run.py:766] *****************************************
|
| 411 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
|
| 412 |
+
and will be removed in future. Use torchrun.
|
| 413 |
+
Note that --use-env is set by default in torchrun.
|
| 414 |
+
If your script expects `--local-rank` argument to be set, please
|
| 415 |
+
change it to read from `os.environ['LOCAL_RANK']` instead. See
|
| 416 |
+
https://pytorch.org/docs/stable/distributed.html#launch-utility for
|
| 417 |
+
further instructions
|
| 418 |
+
|
| 419 |
+
main()
|
| 420 |
+
W0621 20:54:12.274000 1139880 site-packages/torch/distributed/run.py:766]
|
| 421 |
+
W0621 20:54:12.274000 1139880 site-packages/torch/distributed/run.py:766] *****************************************
|
| 422 |
+
W0621 20:54:12.274000 1139880 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
|
| 423 |
+
W0621 20:54:12.274000 1139880 site-packages/torch/distributed/run.py:766] *****************************************
|
| 424 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
|
| 425 |
+
and will be removed in future. Use torchrun.
|
| 426 |
+
Note that --use-env is set by default in torchrun.
|
| 427 |
+
If your script expects `--local-rank` argument to be set, please
|
| 428 |
+
change it to read from `os.environ['LOCAL_RANK']` instead. See
|
| 429 |
+
https://pytorch.org/docs/stable/distributed.html#launch-utility for
|
| 430 |
+
further instructions
|
| 431 |
+
|
| 432 |
+
main()
|
| 433 |
+
W0621 20:54:12.359000 1484017 site-packages/torch/distributed/run.py:766]
|
| 434 |
+
W0621 20:54:12.359000 1484017 site-packages/torch/distributed/run.py:766] *****************************************
|
| 435 |
+
W0621 20:54:12.359000 1484017 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
|
| 436 |
+
W0621 20:54:12.359000 1484017 site-packages/torch/distributed/run.py:766] *****************************************
|
| 437 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
|
| 438 |
+
and will be removed in future. Use torchrun.
|
| 439 |
+
Note that --use-env is set by default in torchrun.
|
| 440 |
+
If your script expects `--local-rank` argument to be set, please
|
| 441 |
+
change it to read from `os.environ['LOCAL_RANK']` instead. See
|
| 442 |
+
https://pytorch.org/docs/stable/distributed.html#launch-utility for
|
| 443 |
+
further instructions
|
| 444 |
+
|
| 445 |
+
main()
|
| 446 |
+
W0621 20:54:12.445000 1281491 site-packages/torch/distributed/run.py:766]
|
| 447 |
+
W0621 20:54:12.445000 1281491 site-packages/torch/distributed/run.py:766] *****************************************
|
| 448 |
+
W0621 20:54:12.445000 1281491 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
|
| 449 |
+
W0621 20:54:12.445000 1281491 site-packages/torch/distributed/run.py:766] *****************************************
|
| 450 |
+
[rank8]:[W621 20:54:38.038379846 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 8] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 451 |
+
[rank16]:[W621 20:54:38.135510632 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 16] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 452 |
+
[rank0]:[W621 20:54:38.871244986 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 453 |
+
[rank24]:[W621 20:54:38.537586267 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 24] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 454 |
+
[rank5]:[W621 20:54:38.181728316 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 455 |
+
[rank7]:[W621 20:54:38.181922829 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 456 |
+
[rank2]:[W621 20:54:38.181931209 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 457 |
+
[rank4]:[W621 20:54:38.182004179 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 458 |
+
[rank18]:[W621 20:54:38.488096925 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 18] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 459 |
+
[rank23]:[W621 20:54:38.488106584 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 23] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 460 |
+
[rank21]:[W621 20:54:38.488118308 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 21] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 461 |
+
[rank26]:[W621 20:54:38.565763990 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 26] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 462 |
+
[rank20]:[W621 20:54:38.488286233 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 20] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 463 |
+
[rank28]:[W621 20:54:38.565817999 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 28] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 464 |
+
[rank6]:[W621 20:54:38.183551244 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 465 |
+
[rank10]:[W621 20:54:38.397740736 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 10] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 466 |
+
[rank15]:[W621 20:54:38.397740704 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 15] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 467 |
+
[rank29]:[W621 20:54:38.567030553 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 29] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
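Each rank also logs that its GPU is "currently unknown" to the process group at init time. As the warning suggests, passing device_id to init_process_group pins the rank-to-GPU mapping up front; a minimal sketch for the one-process-per-GPU layout used here (how this slots into Megatron's own initialization is not shown in this log):

    import os
    import torch
    import torch.distributed as dist

    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Binding the group to a concrete device removes the "using GPU 0 ...
    # currently unknown" warning and guards against a wrong rank->GPU mapping.
    dist.init_process_group(
        backend="nccl",
        device_id=torch.device(f"cuda:{local_rank}"),
    )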
attnserver.run_attnserver.slurm.sh.343196.out.log
CHANGED
|
The diff for this file is too large to render.
See raw diff
|
attnserver.run_attnserver.slurm.sh.343197.err.log
CHANGED
|
@@ -184,3 +184,266 @@ W0621 20:53:05.653000 3302834 site-packages/torch/distributed/run.py:766]
|
| 184 |
W0621 20:53:05.653000 3302834 site-packages/torch/distributed/run.py:766] *****************************************
|
| 185 |
W0621 20:53:05.653000 3302834 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
|
| 186 |
W0621 20:53:05.653000 3302834 site-packages/torch/distributed/run.py:766] *****************************************
|
| 187 |
+
[rank0]:[W621 20:53:36.476389910 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 0] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 188 |
+
[rank8]:[W621 20:53:36.397410647 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 8] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 189 |
+
[rank16]:[W621 20:53:36.909565270 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 16] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 190 |
+
[rank7]:[W621 20:53:37.214991830 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 7] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 191 |
+
[rank15]:[W621 20:53:37.103476459 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 15] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 192 |
+
[rank23]:[W621 20:53:37.556635144 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 23] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 193 |
+
[rank31]:[W621 20:53:37.626428569 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 31] using GPU 7 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 194 |
+
[rank24]:[W621 20:53:37.689221072 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 24] using GPU 0 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 195 |
+
[rank4]:[W621 20:53:37.287112172 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 4] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 196 |
+
[rank2]:[W621 20:53:37.287117402 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 2] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 197 |
+
[rank26]:[W621 20:53:37.696026079 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 26] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 198 |
+
[rank20]:[W621 20:53:37.628797877 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 20] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 199 |
+
[rank12]:[W621 20:53:37.176861429 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 12] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 200 |
+
[rank18]:[W621 20:53:37.633084211 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 18] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 201 |
+
[rank3]:[W621 20:53:37.294993219 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 3] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 202 |
+
[rank10]:[W621 20:53:37.181751478 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 10] using GPU 2 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 203 |
+
[rank5]:[W621 20:53:37.295509111 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 5] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 204 |
+
[rank29]:[W621 20:53:37.704865500 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 29] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 205 |
+
[rank27]:[W621 20:53:37.704869504 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 27] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 206 |
+
[rank28]:[W621 20:53:37.705096442 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 28] using GPU 4 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 207 |
+
[rank11]:[W621 20:53:37.191090393 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 11] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 208 |
+
[rank19]:[W621 20:53:37.645800975 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 19] using GPU 3 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 209 |
+
[rank21]:[W621 20:53:37.646140458 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 21] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 210 |
+
[rank1]:[W621 20:53:37.309853849 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 1] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 211 |
+
[rank9]:[W621 20:53:37.197524871 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 9] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 212 |
+
[rank17]:[W621 20:53:37.647656849 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 17] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 213 |
+
[rank6]:[W621 20:53:37.317487619 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 6] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 214 |
+
[rank13]:[W621 20:53:37.200277507 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 13] using GPU 5 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 215 |
+
[rank25]:[W621 20:53:37.720994991 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 25] using GPU 1 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 216 |
+
[rank22]:[W621 20:53:37.664677490 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 22] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 217 |
+
[rank14]:[W621 20:53:37.208867094 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 14] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 218 |
+
[rank30]:[W621 20:53:37.725932637 ProcessGroupNCCL.cpp:4715] [PG ID 0 PG GUID 0 Rank 30] using GPU 6 as device used by this process is currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. You can pecify device_id in init_process_group() to force use of a particular device.
|
| 219 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 220 |
+
warnings.warn(
|
| 221 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 222 |
+
warnings.warn(
|
| 223 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 224 |
+
warnings.warn(
|
| 225 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 226 |
+
warnings.warn(
|
| 227 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 228 |
+
warnings.warn(
|
| 229 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 230 |
+
warnings.warn(
|
| 231 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 232 |
+
warnings.warn(
|
| 233 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 234 |
+
warnings.warn(
|
| 235 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 236 |
+
warnings.warn(
|
| 237 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 238 |
+
warnings.warn(
|
| 239 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 240 |
+
warnings.warn(
|
| 241 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 242 |
+
warnings.warn(
|
| 243 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 244 |
+
warnings.warn(
|
| 245 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 246 |
+
warnings.warn(
|
| 247 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 248 |
+
warnings.warn(
|
| 249 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 250 |
+
warnings.warn(
|
| 251 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 252 |
+
warnings.warn(
|
| 253 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 254 |
+
warnings.warn(
|
| 255 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 256 |
+
warnings.warn(
|
| 257 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 258 |
+
warnings.warn(
|
| 259 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 260 |
+
warnings.warn(
|
| 261 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 262 |
+
warnings.warn(
|
| 263 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 264 |
+
warnings.warn(
|
| 265 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 266 |
+
warnings.warn(
|
| 267 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 268 |
+
warnings.warn(
|
| 269 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 270 |
+
warnings.warn(
|
| 271 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 272 |
+
warnings.warn(
|
| 273 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 274 |
+
warnings.warn(
|
| 275 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 276 |
+
warnings.warn(
|
| 277 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 278 |
+
warnings.warn(
|
| 279 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 280 |
+
warnings.warn(
|
| 281 |
+
/mnt/weka/home/hao.zhang/junda/attnserver-megatron/megatron/core/models/gpt/gpt_layer_specs.py:94: UserWarning: The fp8 argument in "get_gpt_layer_with_transformer_engine_spec" has been deprecated and will be removed soon. Please update your code accordingly.
|
| 282 |
+
warnings.warn(
|
| 283 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 284 |
+
warnings.warn(
|
| 285 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 286 |
+
warnings.warn(
|
| 287 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 288 |
+
warnings.warn(
|
| 289 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 290 |
+
warnings.warn(
|
| 291 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 292 |
+
warnings.warn(
|
| 293 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 294 |
+
warnings.warn(
|
| 295 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 296 |
+
warnings.warn(
|
| 297 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 298 |
+
warnings.warn(
|
| 299 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 300 |
+
warnings.warn(
|
| 301 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 302 |
+
warnings.warn(
|
| 303 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 304 |
+
warnings.warn(
|
| 305 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 306 |
+
warnings.warn(
|
| 307 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 308 |
+
warnings.warn(
|
| 309 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 310 |
+
warnings.warn(
|
| 311 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 312 |
+
warnings.warn(
|
| 313 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 314 |
+
warnings.warn(
|
| 315 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 316 |
+
warnings.warn(
|
| 317 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 318 |
+
warnings.warn(
|
| 319 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 320 |
+
warnings.warn(
|
| 321 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 322 |
+
warnings.warn(
|
| 323 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 324 |
+
warnings.warn(
|
| 325 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 326 |
+
warnings.warn(
|
| 327 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 328 |
+
warnings.warn(
|
| 329 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 330 |
+
warnings.warn(
|
| 331 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 332 |
+
warnings.warn(
|
| 333 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 334 |
+
warnings.warn(
|
| 335 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 336 |
+
warnings.warn(
|
| 337 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 338 |
+
warnings.warn(
|
| 339 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 340 |
+
warnings.warn(
|
| 341 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 342 |
+
warnings.warn(
|
| 343 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 344 |
+
warnings.warn(
|
| 345 |
+
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/transformer_engine/pytorch/cpu_offload.py:595: DeprecationWarning: Offloading weights is deprecated. Using offload_weights=True does not have any effect.
|
| 346 |
+
warnings.warn(
|
| 347 |
+
[rank4]:[W621 20:54:13.424066979 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 348 |
+
[rank5]:[W621 20:54:13.521024731 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 349 |
+
[rank7]:[W621 20:54:13.539679935 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 350 |
+
[rank6]:[W621 20:54:13.548186503 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 351 |
+
[rank2]:[W621 20:54:13.552759866 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 352 |
+
[rank3]:[W621 20:54:13.612504683 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 353 |
+
[rank1]:[W621 20:54:13.686343546 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 354 |
+
[rank0]:[W621 20:54:13.721086784 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 355 |
+
[rank9]:[W621 20:54:13.791768962 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 356 |
+
[rank25]:[W621 20:54:14.454751915 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 357 |
+
[rank13]:[W621 20:54:14.967778716 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 358 |
+
[rank8]:[W621 20:54:14.979701392 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 359 |
+
[rank27]:[W621 20:54:14.507683165 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 360 |
+
[rank31]:[W621 20:54:14.523350337 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
|
| 361 |
+
[rank26]:[W621 20:54:14.526671273 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank28]:[W621 20:54:14.528885182 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank23]:[W621 20:54:14.475122137 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank20]:[W621 20:54:14.510825805 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank24]:[W621 20:54:14.632187007 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank16]:[W621 20:54:14.573884922 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank17]:[W621 20:54:14.628522551 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank30]:[W621 20:54:14.716619743 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank29]:[W621 20:54:14.721612508 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank15]:[W621 20:54:14.213641707 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank19]:[W621 20:54:14.671145596 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank21]:[W621 20:54:14.683378062 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank18]:[W621 20:54:14.710130671 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank10]:[W621 20:54:14.267877352 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank22]:[W621 20:54:14.749166904 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank12]:[W621 20:54:14.299484416 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank14]:[W621 20:54:14.306472709 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
[rank11]:[W621 20:54:14.308930759 ProcessGroupNCCL.cpp:1476] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
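Editor's note: the warnings above mean every rank exited without tearing down its NCCL process group. A minimal sketch of the shutdown the warning asks for, assuming a standard torch.distributed setup launched by torchrun or torch.distributed.launch (the init arguments and structure here are illustrative, not taken from pretrain_gpt_profile.py):

```python
import os

import torch
import torch.distributed as dist


def main() -> None:
    # torchrun / torch.distributed.launch export RANK, WORLD_SIZE and LOCAL_RANK.
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
    try:
        pass  # training loop would run here
    finally:
        # Explicit teardown is what silences the ProcessGroupNCCL warning and
        # releases communicator resources before the process exits.
        dist.destroy_process_group()


if __name__ == "__main__":
    main()
```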
+ set +x
+ set +x
+ set +x
+ set +x
+ for ctx_length in 1024 2048 4096 8192 12288 16384 24576 32768 40960 49152 65536 81920 98304 131072
+ export PROF_CTX_LENGTH=2048
+ PROF_CTX_LENGTH=2048
+ name='/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L2048*tp8.cp4.bs4.json'
+ '[' -f '/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5/mytrace.L2048*tp8.cp4.bs4.json' ']'
+ echo 'Running ctx_length=2048, TP_SIZE=8, CP_SIZE=4, BATCH_SIZE=4'
+ srun bash ./attnserver.sh
+ which python3
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 4 --node_rank 0 --rdzv_id 343197 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-852:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 2048 --max-position-embeddings 2048 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
+ which python3
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 4 --node_rank 2 --rdzv_id 343197 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-852:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 2048 --max-position-embeddings 2048 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
+ which python3
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 4 --node_rank 3 --rdzv_id 343197 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-852:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 2048 --max-position-embeddings 2048 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
+ which python3
+ python3 -m torch.distributed.launch --nproc_per_node 8 --nnodes 4 --node_rank 1 --rdzv_id 343197 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-852:29500 ./pretrain_gpt_profile.py --tensor-model-parallel-size 8 --context-parallel-size 4 --num-layers 2 --hidden-size 4096 --num-attention-heads 64 --group-query-attention --num-query-groups 16 --seq-length 2048 --max-position-embeddings 2048 --micro-batch-size 1 --global-batch-size 1 --lr 0.0005 --train-iters 10 --lr-decay-iters 150000 --lr-decay-style cosine --lr-warmup-iters 2 --weight-decay .1 --adam-beta2 .999 --fp16 --log-interval 1 --save-interval 16 --eval-interval 16 --eval-iters 1 --vocab-file vocab.json --merge-file merges.txt --save gpt-checkpoint --load gpt-checkpoint --logging-level 0 --mock-data --tensorboard-dir tensorboard-logs/
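Editor's note: the xtrace above is the sweep wrapper iterating over context lengths, skipping a configuration when its trace file already exists, and launching one torch.distributed.launch process per node via srun. A rough Python re-expression of that control flow, for readers following the log (the helper name run_one_config and the use of glob to honour the wildcard in the trace-file name are assumptions, not part of the original script):

```python
import glob
import os
import subprocess

CTX_LENGTHS = [1024, 2048, 4096, 8192, 12288, 16384, 24576, 32768,
               40960, 49152, 65536, 81920, 98304, 131072]
TRACE_DIR = "/mnt/sharefs/users/hao.zhang/junda/megatron-prof-data--unstable-v5"


def run_one_config(ctx_length: int, tp: int = 8, cp: int = 4, bs: int = 4) -> None:
    os.environ["PROF_CTX_LENGTH"] = str(ctx_length)
    pattern = f"{TRACE_DIR}/mytrace.L{ctx_length}*tp{tp}.cp{cp}.bs{bs}.json"
    # Skip configurations whose trace has already been written.
    if glob.glob(pattern):
        return
    print(f"Running ctx_length={ctx_length}, TP_SIZE={tp}, CP_SIZE={cp}, BATCH_SIZE={bs}")
    # One srun step per configuration; attnserver.sh then runs the per-node
    # torch.distributed.launch command seen in the log above.
    subprocess.run(["srun", "bash", "./attnserver.sh"], check=True)


if __name__ == "__main__":
    for ctx in CTX_LENGTHS:
        run_one_config(ctx)
```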
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects `--local-rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

main()
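Editor's note: the FutureWarning, repeated once per node below, recommends moving off torch.distributed.launch. A small sketch of the pattern it suggests, reading the local rank from the environment instead of a --local-rank argument (illustrative only; not taken from pretrain_gpt_profile.py):

```python
import os

import torch
import torch.distributed as dist


def setup_distributed() -> int:
    # torchrun sets LOCAL_RANK (plus RANK and WORLD_SIZE) in the environment,
    # so the script no longer needs a --local-rank command-line argument.
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl")
    return local_rank
```

Launched, for example, with `torchrun --nproc_per_node 8 --nnodes 4 --rdzv_id 343197 --rdzv_backend c10d --rdzv_endpoint fs-mbz-gpu-852:29500 ./pretrain_gpt_profile.py ...` in place of `python3 -m torch.distributed.launch ...`.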
W0621 20:54:20.099000 3375010 site-packages/torch/distributed/run.py:766]
W0621 20:54:20.099000 3375010 site-packages/torch/distributed/run.py:766] *****************************************
W0621 20:54:20.099000 3375010 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0621 20:54:20.099000 3375010 site-packages/torch/distributed/run.py:766] *****************************************
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects `--local-rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

main()
W0621 20:54:20.165000 3306626 site-packages/torch/distributed/run.py:766]
W0621 20:54:20.165000 3306626 site-packages/torch/distributed/run.py:766] *****************************************
W0621 20:54:20.165000 3306626 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0621 20:54:20.165000 3306626 site-packages/torch/distributed/run.py:766] *****************************************
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects `--local-rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

main()
W0621 20:54:20.250000 87332 site-packages/torch/distributed/run.py:766]
W0621 20:54:20.250000 87332 site-packages/torch/distributed/run.py:766] *****************************************
W0621 20:54:20.250000 87332 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0621 20:54:20.250000 87332 site-packages/torch/distributed/run.py:766] *****************************************
/mnt/weka/home/hao.zhang/conda/miniconda/envs/junda-attnserver/lib/python3.12/site-packages/torch/distributed/launch.py:207: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects `--local-rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

main()
W0621 20:54:20.309000 2008025 site-packages/torch/distributed/run.py:766]
W0621 20:54:20.309000 2008025 site-packages/torch/distributed/run.py:766] *****************************************
W0621 20:54:20.309000 2008025 site-packages/torch/distributed/run.py:766] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0621 20:54:20.309000 2008025 site-packages/torch/distributed/run.py:766] *****************************************
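Editor's note: torch.distributed.run defaults OMP_NUM_THREADS to 1 for every spawned worker; the warning invites tuning it when CPU-side work (data loading, tokenization) matters. A hedged illustration of raising it before workers are spawned (the value 4 is arbitrary, not taken from this job):

```python
import os

# Must be set before the workers are spawned (e.g. in the launching shell or
# at the top of the entry script); otherwise torch.distributed.run falls back
# to its default of 1 OpenMP thread per process.
os.environ.setdefault("OMP_NUM_THREADS", "4")
```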
attnserver.run_attnserver.slurm.sh.343197.out.log
CHANGED
The diff for this file is too large to render. See raw diff.

attnserver.run_attnserver.slurm.sh.343201.out.log
CHANGED
@@ -37843,3 +37843,189 @@ batch tensor after cp: labels torch.Size([1, 65536])
batch tensor after cp: loss_mask torch.Size([1, 65536])
batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
batch tensor after cp: position_ids torch.Size([1, 65536])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 65536])
batch tensor after cp: labels torch.Size([1, 65536])
batch tensor after cp: loss_mask torch.Size([1, 65536])
batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
batch tensor after cp: position_ids torch.Size([1, 65536])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 65536])
batch tensor after cp: labels torch.Size([1, 65536])
batch tensor after cp: loss_mask torch.Size([1, 65536])
batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
batch tensor after cp: position_ids torch.Size([1, 65536])
Start exporting trace 7
Done exporting trace 7
[2025-06-21 20:53:31] iteration 8/ 10 | consumed samples: 8 | elapsed time per iteration (ms): 40195.5 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 33554432.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 65536])
batch tensor after cp: labels torch.Size([1, 65536])
batch tensor after cp: loss_mask torch.Size([1, 65536])
batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
batch tensor after cp: position_ids torch.Size([1, 65536])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 65536])
batch tensor after cp: labels torch.Size([1, 65536])
batch tensor after cp: loss_mask torch.Size([1, 65536])
batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
batch tensor after cp: position_ids torch.Size([1, 65536])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 65536])
batch tensor after cp: labels torch.Size([1, 65536])
batch tensor after cp: loss_mask torch.Size([1, 65536])
batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
batch tensor after cp: position_ids torch.Size([1, 65536])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 65536])
batch tensor after cp: labels torch.Size([1, 65536])
batch tensor after cp: loss_mask torch.Size([1, 65536])
batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
batch tensor after cp: position_ids torch.Size([1, 65536])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 65536])
batch tensor after cp: labels torch.Size([1, 65536])
batch tensor after cp: loss_mask torch.Size([1, 65536])
batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
batch tensor after cp: position_ids torch.Size([1, 65536])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 65536])
batch tensor after cp: labels torch.Size([1, 65536])
batch tensor after cp: loss_mask torch.Size([1, 65536])
batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
batch tensor after cp: position_ids torch.Size([1, 65536])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 65536])
batch tensor after cp: labels torch.Size([1, 65536])
batch tensor after cp: loss_mask torch.Size([1, 65536])
batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
batch tensor after cp: position_ids torch.Size([1, 65536])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 65536])
batch tensor after cp: labels torch.Size([1, 65536])
batch tensor after cp: loss_mask torch.Size([1, 65536])
batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
batch tensor after cp: position_ids torch.Size([1, 65536])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 65536])
batch tensor after cp: labels torch.Size([1, 65536])
batch tensor after cp: loss_mask torch.Size([1, 65536])
batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
batch tensor after cp: position_ids torch.Size([1, 65536])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 65536])
batch tensor after cp: labels torch.Size([1, 65536])
batch tensor after cp: loss_mask torch.Size([1, 65536])
batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
batch tensor after cp: position_ids torch.Size([1, 65536])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 65536])
batch tensor after cp: labels torch.Size([1, 65536])
batch tensor after cp: loss_mask torch.Size([1, 65536])
batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
batch tensor after cp: position_ids torch.Size([1, 65536])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 65536])
batch tensor after cp: labels torch.Size([1, 65536])
batch tensor after cp: loss_mask torch.Size([1, 65536])
batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
batch tensor after cp: position_ids torch.Size([1, 65536])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 65536])
batch tensor after cp: labels torch.Size([1, 65536])
batch tensor after cp: loss_mask torch.Size([1, 65536])
batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
batch tensor after cp: position_ids torch.Size([1, 65536])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 65536])
batch tensor after cp: labels torch.Size([1, 65536])
batch tensor after cp: loss_mask torch.Size([1, 65536])
batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
batch tensor after cp: position_ids torch.Size([1, 65536])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 65536])
batch tensor after cp: labels torch.Size([1, 65536])
batch tensor after cp: loss_mask torch.Size([1, 65536])
batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
batch tensor after cp: position_ids torch.Size([1, 65536])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 65536])
batch tensor after cp: labels torch.Size([1, 65536])
batch tensor after cp: loss_mask torch.Size([1, 65536])
batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
batch tensor after cp: position_ids torch.Size([1, 65536])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 65536])
batch tensor after cp: labels torch.Size([1, 65536])
batch tensor after cp: loss_mask torch.Size([1, 65536])
batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
batch tensor after cp: position_ids torch.Size([1, 65536])
batch tensor: tokens torch.Size([1, 131072])
batch tensor: labels torch.Size([1, 131072])
batch tensor: loss_mask torch.Size([1, 131072])
batch tensor: attention_mask torch.Size([1, 1, 131072, 131072])
batch tensor: position_ids torch.Size([1, 131072])
batch tensor after cp: tokens torch.Size([1, 65536])
batch tensor after cp: labels torch.Size([1, 65536])
batch tensor after cp: loss_mask torch.Size([1, 65536])
batch tensor after cp: attention_mask torch.Size([1, 1, 65536, 131072])
batch tensor after cp: position_ids torch.Size([1, 65536])
Start exporting trace 8
Done exporting trace 8
[2025-06-21 20:54:11] iteration 9/ 10 | consumed samples: 9 | elapsed time per iteration (ms): 39801.9 | learning rate: 0.000000E+00 | global batch size: 1 | loss scale: 16777216.0 | number of skipped iterations: 1 | number of nan iterations: 0 |
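Editor's note: the shape printouts show each context-parallel rank keeping a 65536-token slice of the full 131072-token sequence while the attention mask retains all 131072 key positions. A toy illustration of that kind of split (a plain contiguous chunking sketch under assumed names; Megatron's actual context-parallel sharding, which interleaves chunks for load balance, is not reproduced here):

```python
import torch


def split_for_context_parallel(batch: dict, cp_rank: int, cp_size: int) -> dict:
    """Keep only this rank's contiguous slice of the sequence dimension.

    With seq_len 131072 and a split factor of 2, each rank keeps 65536
    tokens, matching the "after cp" shapes in the log. The attention mask
    keeps its full key dimension so the rank can still attend to every
    position of the original sequence.
    """
    seq_len = batch["tokens"].size(1)
    chunk = seq_len // cp_size
    start, end = cp_rank * chunk, (cp_rank + 1) * chunk
    out = {k: v[:, start:end] for k, v in batch.items() if k != "attention_mask"}
    # Mask layout [b, 1, seq_q, seq_k]: slice only the query dimension.
    out["attention_mask"] = batch["attention_mask"][:, :, start:end, :]
    return out


if __name__ == "__main__":
    # Small sizes so the example actually runs; the log's case is s = 131072.
    b, s = 1, 8
    batch = {
        "tokens": torch.zeros(b, s, dtype=torch.long),
        "labels": torch.zeros(b, s, dtype=torch.long),
        "loss_mask": torch.ones(b, s),
        "position_ids": torch.arange(s).unsqueeze(0),
        "attention_mask": torch.ones(b, 1, s, s, dtype=torch.bool),
    }
    shard = split_for_context_parallel(batch, cp_rank=0, cp_size=2)
    print(shard["tokens"].shape)          # torch.Size([1, 4])
    print(shard["attention_mask"].shape)  # torch.Size([1, 1, 4, 8])
```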